
Character Analysis of Daisy in The Great Gatsby

Daisy Buchanan is Nick's cousin and is introduced to the story when Nick goes to her house for a visit. The house is a huge Georgian Colonial mansion situated in East Egg, overlooking the bay. She lives there with her husband, Tom, and her three-month-old daughter. It is clear from everything about them that they are extremely rich and well off, but their money has made them arrogant. They feel that they, especially Tom, are superior to everyone else, and they look down on and condescend to anyone below them in wealth and social standing.

When Nick arrives at the house he is met by Tom, standing dominantly on the steps. Tom leads Nick into the sitting room, where he finds Daisy and Jordan Baker, who is in many ways an unmarried version of Daisy, dressed all in white, sitting on an "enormous couch… buoyed up as though upon an anchored balloon… rippling and fluttering as if it had just been blown back in after a short flight around the house." From this moment, Daisy becomes like an angel on earth. She is routinely linked with the color white, always at the height of fashion, and addresses people with only the most endearing terms.

She appears pure in a world of cheats and liars. As the visit goes on and we learn more and more about her, we begin to notice her characteristics and personality. We notice her voice, which is described as "thrilling", "glowing" and "singing". She seems friendly and happy to meet Nick and talk to him about his life. But as the chapter goes on we learn otherwise. Although Daisy stands in stark contrast to her husband, Tom, she is frail and diminutive, and actually labors at being shallow. She laughs at every opportunity. Daisy is utterly transparent, feebly affecting an air of worldliness and cynicism.

Though she breezily remarks that everything is in decline, she does so only in order to seem to agree with her husband. She and Jordan are dressed in white when Nick arrives, and she mentions that they spent a "white girl-hood" together; the ostensible purity of Daisy and Jordan stands in ironic contrast to their actual decadence and corruption. But there is certainly something about Daisy that makes her special. She is not like any of the other women. What is it about her that is so different, so thrilling, so intriguing? Of course, she is beautiful; in her hometown of Louisville, she was always the belle of the ball.

She is also fun-loving and something of a flirt. Her conversation is charmingly sassy and delightfully frivolous. Even Nick, her cousin, can't help but be taken in by Daisy's many charms. But simply being charming isn't enough to make Daisy stand out from the crowd. There is something else that makes her special and different. There are many reasons why Daisy is found so attractive, from her voice to her physical beauty. Her physical beauty can be inferred from the fact that Tom, being so arrogant and competitive, would not have settled for anything less than the most beautiful girl he could find.

The real problem is that Daisy isn't really some mythical, divine creature. She is ultimately a real, living, breathing woman who is flawed, just like the rest of us. Daisy is used to her life being a certain way: she follows certain rules, she expects certain rewards. Daisy is in love with money, ease, and material luxury. She cannot live without it. Everything she does gives off an air of upper class, even if she herself is quite crass. She seems to hide behind her money, being in a "distinguished secret society to which she and Tom belonged".

Marketing Strategy of Coca-Cola in India

INDIA
India has a GDP of over USD 1.236 trillion (2009 estimate), the 12th largest in the world (4th largest in terms of GDP on a purchasing power parity basis, at USD 3.57 trillion), and a per capita income of just USD 1,100 (136th in the world). Even during the recessionary period of the last two years, the economy has been growing at over 7%, the 2nd fastest rate in the world (after China), which means its average income will double within 10 years.
Demographic Trends
India's population stood at 1.157 billion (July 2010 estimate), growing at a rate of 1.07% (2010 estimate), with 64.3% of the total population between the ages of 15 and 64 years and 30.5% below 15 years of age. The sex ratio of the total population remains at 1.08 males to one female (2010 estimate). The following changes have been observed in the structure of the Indian population over the last decade, along with their impact on companies targeting Indian consumers:
* Women are increasingly becoming career-oriented and prefer to have children at a later stage in life.

Especially in urban areas, there has been a tendency to have at most one or two children, which has resulted in parents spending more per child (leading to a better quality of life and education) and has allowed women to stay at work longer (increasing household incomes and encouraging the purchase of labor- or time-saving products).
* Alongside the declining number of children has been a decline in the average size of households. The growth of small households has had numerous marketing implications, ranging from increased demand for smaller units of housing to smaller sizes but better quality of clothing and groceries.

Social, Cultural and Lifestyle Developments
In the last decade, India has seen more global influences than ever before. Technology has brought the diverse nation closer together, leading to the evolution of communication patterns and narrowing the gap between urban and rural areas. This cultural shift has had definite impacts on the Indian work scenario. Existing companies have redefined their strategies, and companies have realized the importance of understanding the Indian playground in depth. The following key trends observed in the socio-cultural environment of India give insights into the new aspects of thought and communication that largely drive the nation today.
* The role of women in society is changing as men and women increasingly share expectations in terms of employment and household responsibilities. For example, products like ready-prepared meals, which relieve working women of their traditional role in preparing household meals, will have more scope for growth.
* Leisure is becoming a bigger part of many people's lives, and companies have responded with a wide range of leisure-related goods and services.
* Greater life expectancy is leading to an ageing population and a shift towards an increasingly elderly culture. Even so, with 64% of the population between the ages of 15 and 40 years, the Indian market is considered a youth market.
Technology Changing the Lives of Indian Consumers
The pace of technological change is becoming increasingly rapid in the Indian consumer market, allowing new goods and services to be offered to consumers (for example, Internet banking and mobile telecoms).

New technology can allow existing products to be made more cheaply, thereby widening the market for such goods by enabling prices to be lowered. In this way, more efficient, low-cost airlines have allowed new markets for domestic air travel to develop.
Outlook: We have looked at the socio-cultural implications for Indian consumers during the last decade. Importantly, steps in the right direction need to be taken immediately to take advantage of the existing opportunity to boost economic growth and enhance the socio-cultural landscape of India.

CASE – 1 (MANUFACTURER)
Changing Consumer Demand: Coca-Cola's strategy to exploit India's rural markets during the last decade
Coca-Cola's operations in India: Coca-Cola, the world's largest seller of soft drink concentrates since 1886, returned to India in 1993 after a 16-year hiatus, giving a new footprint to the Indian soft drink market. Coca-Cola has made significant investments to build up its business in India, including new production facilities, waste water treatment plants, distribution systems, and marketing channels.

Coca-Cola India is among India's top international investors, having invested more than USD 1.1 billion in India since its entry in 1993.
Decision to explore Indian rural markets: During 2002, Coca-Cola came out with an idea to explore India's rural markets in order to increase its sales volumes and gain overall market share in the country. This decision was not surprising: given the huge size of the untapped rural market in India and flat sales in the urban areas, it was clear that Coca-Cola would have to shift its focus to the rural market.

However, the poor rural infrastructure and consumption habits that are very different from those of urban people were two major obstacles to cracking the rural market for Coca-Cola India (CCI). Because of the erratic power supply, most grocers in rural areas did not stock cold drinks. Also, people in rural areas had a preference for traditional cold beverages such as 'Lassi' (a yogurt drink) and lemon juice. Further, the price of the beverage was also a major factor for the rural consumer.
Coca-Cola's Rural Marketing Strategy: Coca-Cola's rural marketing strategy was based on three A's – Availability, Affordability and Acceptability.

The first 'A' – Availability – emphasized the availability of the product to the customer; the second 'A' – Affordability – focused on product pricing; and the third 'A' – Acceptability – focused on convincing the customer to buy the product.
1) Availability
When Coca-Cola entered the rural market, it focused on strengthening its distribution network there. It realized that the centralized distribution system used by the company in the urban areas would not be suitable for rural areas. In the centralized distribution system, the product was transported directly from the bottling plants to retailers.
2) Affordability

Coca-Cola conducted a survey of Indian rural markets (in order to frame its marketing strategy) and concluded that the 300 ml bottles (primarily sold in urban areas) were not popular with rural and semi-urban residents, where two persons often shared a 300 ml bottle. It was also found that the price of Rs 10 (15 cents) per bottle was considered too high by rural consumers. For these reasons, Coca-Cola decided to change the size of its bottles and its pricing to win over consumers in the rural market, introducing 200 ml bottles, called 'Chota Coke' (small Coke), priced at Rs 5 (7 cents).

Coca-Cola announced that it would push the 200 ml bottles more in rural areas, as the rural market was very price-sensitive, and was confident that this would increase the rate of consumption in rural India (as evident from the fact that rural sales accounted for over 50% of Coca-Cola's total sales in 2003).
3) Acceptability
Coca-Cola's initiatives in distribution and pricing were supported by extensive marketing in the mass media as well as through outdoor advertising. The company put up hoardings in villages and painted the name Coca-Cola on the compounds of residences in the villages.

The company also set up temporary retail outlets and participated in fairs and traditional local gatherings (primarily at festivals), which are major sources of business activity and entertainment in rural India.
The Upshot
Coca-Cola's marketing initiatives proved to be very successful, and as a result its rural penetration increased from 9% in 2001 to 35% in 2003 and 54% in 2009, as Coca-Cola continued to add more villages to its distribution network in rural markets.
CASE – 2 (SERVICE PROVIDER)
Changing consumer demand: Bharatmatrimony.com to tap India's growing online matrimonial services market
An Overview of the Company: "The institution of marriage holds a significant moment in the life of an Indian" – Mr. Janakiraman, CEO, Bharatmatrimony.com. BharatMatrimony.com has grown into a leading name in the online matrimonial market in India within the last decade. The company has successfully innovated the concept of match-making on the Internet, connecting millions of marriage aspirants across India and the world. The company has been making profits right from the beginning (net profits of USD 3.6 million in 2009) and is currently growing at a rate of 300% per annum.
Key Milestones: BharatMatrimony.com has over 10 million members worldwide and has been recognized by the Limca Book of Records for having the highest number of documented marriages. The site has also been awarded "The Best Matrimony Portal" by PC World, a leading technology magazine.
Change in demand for matrimonial/match-making services from traditional means (marriage bureaus/brokers, newspaper advertisements and family/friend networks) to online services: India's online match-making market is presently worth about USD 20 million, a small fraction of the roughly USD 500 million spent on traditional matrimonial services.

It is estimated that there are around 450 million people in India currently below the age of 21, and with over 300 million people expected to get married in India in the next 30 years, matrimonial services is a fast-growing market. The online match-making concept still remains largely untapped and is expected to grow at an annual rate of 50% to 70% over the coming 5-7 years.
Growing penetration of the internet, especially among young Indians: India has one of the youngest population pools and the fifth largest Internet population in the world, with the present figure crossing 40 million online users, a number that is expected to grow substantially.

Matrimonial websites are increasingly turning into a better option for the younger generation in their search for the perfect life partner. India has a low level of Internet penetration compared to other countries, many of which have, as a result of higher penetration, seen the growth of a large number of successful Internet businesses over the past decade.
Bharatmatrimony's innovative idea of promoting match-making through the internet: The company has a first-mover advantage, since it was the one that started the online matrimony segment in India.

On the socio-cultural front, the dominant tradition is that of arranged marriages, where the parents or family elders find a suitable match for the young adults. Though matrimonial portals are a fairly recent phenomenon, the trend has picked up. Match the demographics and the tradition of arranged marriages and there is clearly a huge market for match-making – whatever the medium. With its reach, convenience, speed and relative privacy, the Internet provides a superior alternative to any other medium.

Users need to simply log on to a matrimonial portal and upload their profiles, sharing as much or as little information as they choose. They can then search for partners according to their individual preferences. Non-resident Indians (NRIs) are zeroing in on their dream partners through the various tools that can be accessed via the internet, and the service is proving to be a big draw for NRIs living in the US, UK, Middle East, Australia and New Zealand. Bharatmatrimony.com is the first portal in India to offer voice-based matrimony services, which allow users to record, listen and reply to any profile using their mobile phones. The website has also taken the lead in offering real-time online horoscope matching and presently offers this service in nine regional languages.
The Road Ahead: Despite the presence of many online matrimonial portals, the vast majority of people still follow the traditional means of finding a soul-mate, and the reasons for this are present social norms and the low internet penetration rate in India.

The online portals are mostly used by people living in metro cities, but with the expected rapid increase of internet users in urban and semi-urban areas, online matrimonial services are expected to grow at 50% annually in the near future.
PART – B (Answers to Questions)
1) Environmental Scanning Strategy Followed by United Spirits Limited
Information regarding the environmental scanning conducted by any company is highly proprietary, and hence I will not be able to discuss the environmental scanning process conducted by my employer.

However, I was able to collect some data and information regarding United Spirits Limited, which is discussed here.
Brief Overview of the Company: United Spirits Limited (UB Group) is India's largest manufacturer of alcoholic beverages (its beer brand is known as 'Kingfisher') and the second largest in the world, with sales of USD 1.12 billion for the FY ending 31/03/2010.
THE ENVIRONMENTAL SCANNING PROCESS
Environmental scanning is the foundation for strategic thinking and planning. True scanning breaks out of the internal focus and limiting paradigms that keep us from seeing and understanding the driving forces in the environment.

Environmental scanning is not fortune telling. We can't predict the future, but we can prepare! United Spirits has analyzed the following factors while researching the external environment for beer markets in India:
SOCIAL & DEMOGRAPHIC FACTORS: India's population is 1.236 billion (2009 estimate), the second largest after China. India's young emerging middle class (64% of the country's population is between the ages of 15 and 64 years) offers tremendous upside growth potential in urban areas.

The economy is growing rapidly and personal disposable income is rising, especially among young Indians working in services sectors like IT/software, banking/financial services, telecoms and consulting.
ECONOMIC FACTORS: India is one of the world's fastest growing consumer markets. A rapidly growing population, an emerging middle class with rising per-capita incomes and blossoming urban centers make India a powerful emerging market.
Favorable agricultural climate and network: India's climate is favorable for the harvesting of hops and barley, the primary natural ingredients in beer.

There exists potential to establish supply relationships with local commodity producers.
Steady growth in India's beer market: India has an active domestic brewing industry with many local firms and rising foreign investment. Although per-capita beer consumption is presently relatively low, the younger generations (characterized by a westernized culture) have the potential to be high-volume consumers. According to a report on the beer market in India from 'MindBranch' (a research firm based in the US), beer sales in India are forecast to grow at a compound annual growth rate of 17.2% until 2012.

COMPETITIVE FACTORS: Although market saturation is only a marginal concern, it is worth noting that a rush of foreign investment, combined with a healthy domestic brewing industry, makes India a highly competitive emerging market.
POLITICAL AND LEGAL FACTORS: A tangled web of taxes and regulations across Indian states (India has 28 states and 7 union territories) remains a major barrier to beer market growth in the country. Differing regulations on pricing and distribution, as well as fluctuating excise duties, foster inefficiencies and make it harder for brewers to attract consumers.

Transporting beer is expensive, and each state levies taxes on alcohol at its own rates. It is a state-by-state market rather than a national market, and taxes are levied at higher rates on all alcoholic products crossing state borders. This makes it essential for brewers to have production facilities in different states (rather than centralized production facilities that would gain economies of scale).
OUTLOOK: Despite its challenges, India's young emerging middle class and favorable agricultural climate make this an attractive expansion opportunity.

Many international players acknowledge India as a largely untapped market with strong growth potential.
Answer to Question 2 (Part B)
United Spirits' Environmental Scanning Strategy: Evaluating its Success
(I could collect only very limited information to measure the success of United Spirits' environmental scanning strategy, as this is proprietary information for the company.)
Successful environmental scanning alerts the organization to critical trends and events before the changes have developed a discernible pattern and before competitors recognize them.

Otherwise, the firm may be forced into a reactive mode instead of being proactive. United Spirits Limited has been very successful in implementing its environmental scanning strategy for beer markets in India, as evident from the following facts:
1) United Spirits reported a significant 79% growth in total sales over the last three financial years, from USD 625 million in FY2006-07 to USD 1.12 billion in FY2009-10.
2) The company is already the largest spirits manufacturing company in India and became the second largest manufacturer in the world in 2010.
3) The company's portfolio of spirits brands is valued at over USD 5 billion.
4) The company is the market leader, with over 59% market share of the Indian spirits business.
5) The company sold over 100 million cases (3.6 billion bottles) in the year 2009-10.
6) The portfolio comprises a wide range of brands, including 20 millionaire liquor brands.
Suggestions for further improvement in the environmental scanning strategy adopted by United Spirits:
Focus on small cities and semi-urban areas of India:

Environmental scanning should now focus on smaller towns and cities. As economic growth has continued to spill over from the major cities into smaller ones, the alcoholic drinks industry has been buoyed by strong growth in the consumption of economy beer in smaller cities in less developed states, such as Orissa, Bihar and Madhya Pradesh.
Incline the product portfolio towards niche brands: With continued expansion in the consumer base for niche, international and premium products, manufacturers and importers have stepped up their activity in niche categories ranging from champagne to dark beer.

The spate of international brand launches continued in 2009, with the entry of several brands, including Korbel Champagne (Brown-Forman Corp) and Teacher's Origin blended Scotch whisky (Beam Global Spirits & Wine). Niche concepts have gained traction recently, for example the opening of Beer Garden, a microbrewery from the Rockman Group, in 2008, and of 'Howzzat', a cricket-themed microbrewery, in 2009.
Explore high-end supermarkets for distribution of premium products: The rise of chained outlets in the on-trade is also expected to provide opportunities for manufacturers to connect with consumers through promotions and tie-ups.

Volume sales of alcoholic drinks through supermarkets saw a strong rise in 2008 and 2009. This provided a point of contact for manufacturers to target consumers with premium products, such as imported lager and wine.
Outlook: United Spirits is expected to gain momentum as demand for branded alcoholic drinks continues to rise in the near future at a double-digit growth rate. A sound environmental scanning strategy can definitely provide a first-mover advantage to United Spirits and help it gain market share.

References List:
1) CIA World Factbook (India page)
2) "India at a Glance", Know India Portal, National Informatics Centre (NIC)
3) India Country Profile from BBC (www.bbc.co.uk)
4) Moody's credit rating and analysis report on India
5) "Interface between urban and rural development in India", in Dutt, Ashok K. and Thakur, Baleshwar, City, Society, and Planning: Planning Essays in Honour of Prof. A. K. Dutt, Concept Publishing
6) http://en.wikipedia.org/wiki/India
7) http://www.coca-colaindia.com
8) http://www.cokefacts.org/facts/facts_in_keyfacts.shtml
9) http://en.wikipedia.org/wiki/Coca-Cola
10) Various news and articles available in the online archives of Indian newspapers (Times of India, Economic Times, Business Standard and Financial Express)
11) Report on India's online matrimonial search from M/S Juxt-Consult Online Research & Advisory Company (an online research consultancy firm based in India)
12) http://www.bharatmatrimony.com
13) Modern Indian Culture and Society, an article by Knut Jacobsen (http://media.routledgeweb.com/pdf/9780415452199/9780415452199.pdf)
14) Information on online matrimonial services from Alexa, a web information company (www.alexa.com)
15) India Ministry of Information and Broadcasting (2009), India: A Reference Annual, New Delhi: Govt. of India
16) ComScore report on social networking sites in India (http://www.gauravonomics.com/blog/comscore-report-on-socialnetworking-sites-in-india)
17) India Wedding Planner (www.indiaweddingplanner.com)
18) "Beer market in India", a report from MindBranch, a research firm based in the US (www.mindbranch.com)
19) CIA World Factbook (India page)
20) 'About us' and financial information pages of www.unitedspirits.in
21) Brewery Magazine (http://www.breweryage.com/industry/)
22) Beer in India (http://en.wikipedia.org/wiki/Beer_in_India)
23) http://www.beerinstitute.org/statistics.asp?sid=2

Catcher in the Rye

AP Language and Composition
July 30th, 2011
The Catcher in the Rye
The Catcher in the Rye is a novel about a young boy named Holden Caulfield who gets kicked out of Pencey Prep for poor academic performance and must make a journey home. The novel is narrated by Holden himself, and you get a chance to view the world through his eyes as he deals with figuring out who he is and tries to make connections with people throughout his journey. The style of The Catcher in the Rye is very distinctive, and Holden talks directly to the reader.

Some words are italicized to show their importance and put emphasis on them. There is also an abundance of swear words throughout the novel. The diction that the author, J. D. Salinger, chooses shows just how young and immature Holden actually is. The tone of this book is very depressing, judgmental, and rambling, but on the other hand it has its humorous and warmhearted moments. The tone is depressing and judgmental because Holden uses phrases such as "That's depressing." or "That's phony." to describe nearly every little thing.

He also rambles and digresses very often throughout the book, like when he talks about "Allie's baseball mitt" or "playing checkers with Jane". These short little stories don't necessarily play a huge role in the novel, but they are of great importance to him, and they show us what kind of person he really is and what he cherishes most. His humorous and warmhearted tone comes out mostly when he talks about his little sister, Phoebe, and how much he cares for her, or his late brother, Allie, and how much he misses him.

There were a few other rare moments when you see just how compassionate Holden can be, for instance his encounter with Sunny. Instead of sleeping with her, he chooses to sit and ask her about herself, thus showing compassion towards her. Towards the conclusion of the novel, something surprising happens. He confesses that he misses everybody he talked about, "even old Stradlater and Ackley". Maybe he reveals this to us because he currently resides in a mental institution, or maybe because deep down he never really found these people depressing at all.

HRM Paper

Week 8, Assignment #2 – Comprehensive Case: "Muffler Magic"
Read the "Muffler Magic" case and write a four-to-five (4-5) page report that answers the following:
1. Specify three (3) recommendations about the functions of recruiting, selection, and training that you think Ron Brown should be addressing with his HR manager now.
Currently you're allowing your HR department to hire employees without "carefully screening each and every candidate, checking their references and work ethic" due to such a high demand for staff.

Inevitably, you're hiring mediocre applicants for more than mediocre pay, at the risk of your name and overall profitability. Being able to answer minimal questions shouldn't be enough to be hired as a technician, and questions such as "what do you think the problem is if a 2001 Camry is overheating? What would you do?" should not be enough to secure a position within the company. Muffler Magic offers a range of products and services, and engine issues are merely one of the many situations an employee may come across.

How do these types of generic questions tell you whether your applicant is able to fulfill the requirements for "muffler replacements, oil changes, and brake jobs"? Obviously, from looking at the handful of situational mishaps you've described, your HR department is merely hiring whoever walks into the office, and in return you're getting inaccurate and potentially life-threatening brake jobs and repairs paid for out of the company's pocket.

This is not acceptable, and it is no wonder the company isn't profiting. One of the reasons why you don't necessarily want to adapt or change some crucial points within the company is the money. Break down one instance where there was an error made by one of your associates: take the engine, for instance. A new engine can cost any consumer somewhere in the ballpark of $2,000 to $4,000*, not including the benefits or any extra perks.

Now let's say that one of these errors happened in every single store; then you're looking at $50,000+ worth of mistakes coming out of Muffler Magic's pocket (keep in mind that estimated figure is from one mistake per store). With that type of money, I would imagine you could hire and appropriately train quite a few applicants who would be worth your time and money. I would recommend changing your recruiting, selection and training standards immediately, starting with the recruiting aspect of Muffler Magic.

Instead of letting the applicants come to you, why don't we go above and beyond and seek out the preferred applicant? We can still advertise through local newspapers and the internet, but we really should be seeking out those employees who show responsibility and potential for retention. The one major thing I didn't see in the recruiting process you're currently using is zeroing in on what type of candidate you are looking for, in terms of education level and experience level, based on the types of work they will be doing.

One of the huge factors to remember is that "Presently, vehicles use high-tech computers and complex electronic systems to monitor the performance of the vehicle. A strong sense of understanding concerning the operation of a vehicle, including how each device interacts, as well as the ability to deal with electronic diagnostic equipment and digital reference manuals, is key to the success of a technician" (http://www.careeroverview.com/auto-mechanic-careers.html). Therefore, Muffler Magic needs qualified individuals who are capable of working with up-to-date automotive machinery and handling the range of situations that could arise. Muffler Magic should be spending its money recruiting individuals who "have successfully completed a vocational training program in automotive service technology" (e.g., Automotive Youth Education Service (AYES)). For a more advanced position, they will need, in addition to vocational training, some kind of "postsecondary automotive technician training", whether through a prior company, community college or technical college.

Finally, other qualifications you should focus on while recruiting are "the ability to diagnose the source of a problem quickly and accurately, good reasoning ability and a thorough knowledge of automobiles, strong communication and analytical skills and good reading, mathematics, and computer skills to study technical manuals", along with the drive to continuously keep up with new technology and learn new service and repair procedures and specifications. To find these types of applicants I would recommend some type of college recruiting, starting with on-campus recruiting and then continuing the process with an on-site visit.

Continuing with the selection process, I think it's quite obvious that we should be focusing on a Personality Profile Analysis (PPA), which applicants can perform online, and then follow this up with a review of the PPA results (see p. 200 of the HRM text). If you choose not to go that route, you can always focus on tests of cognitive abilities (more specifically, aptitude testing) and of motor/physical abilities. If these tests pan out, then we should go forward with a background check and reference check. This may seem an overwhelming process, but finding the right candidates is essential to low turnover rates and high satisfaction levels across the board.

The next step is to select the applicants whose performance on the tests, interview and background check you are satisfied with. After applicants are chosen and hired, we need to start with an orientation covering the company and its overall goals, followed by training. Although on-the-job (OJT) training does offer a lot to the employee, it is not enough for these types of positions. Considering that car technology is constantly advancing, there is a need to continuously further your mechanics' knowledge.

As a responsible employer you should send your "experienced automotive service technicians to manufacturer training centers to learn to repair new models or to receive special training in the repair of components, such as electronic fuel injection or air-conditioners", and even beginner mechanics who show potential may be sent "to manufacturer-sponsored technician training programs to upgrade or maintain employees' skills". There is, of course, crucial training that cannot be offered on the job, and that is electronics training.

This is vital because "electrical components, or a series of related components, account for nearly all malfunctions in modern vehicles". As the employee continues to thrive, the company should offer additional training for possible certifications or advancement opportunities. For example, the "ASE certification has become a standard credential for automotive service technicians. While not mandatory for work in automotive service, certification is common for all experienced technicians in large, urban areas.

Certification is available in eight different areas of automotive service, such as electrical systems, engine repair, brake systems, suspension and steering, and heating and air-conditioning. For certification in each area, technicians must have at least 2 years of experience and pass the examination. Completion of an automotive training program in high school, vocational or trade school, or community or junior college may be substituted for 1 year of experience. For ASE certification as a Master Automobile Technician, technicians must pass all eight examinations."
(* Engine cost estimate: http://www.ehow.com/facts_4830630_cost-car-engine-replacement.html)
2. Write three (3) questions for a structured interview form that Ron Brown's service center managers can use to interview experienced technicians. (Note: do not list possible answers.)
As I said previously, asking generic questions is not going to give you the results most employers desire. There are a couple of things that should be kept in mind when creating these questions, such as: which types of questions would be more effective in displaying the qualities Muffler Magic desires?

Considering that HR already has a lot to do with the hiring process, I think the appropriate form of interview would be a structured situational interview. After analyzing the positions and rating each job's main duties, we would need to create questions reflecting those duties and the daily knowledge needed to perform them. Three questions I would use to "test the waters" would be:
1. What training (classroom or on the job) have you had with engine, transmission or brake diagnostic equipment? Identify the diagnostic program and whether it was computer and software based. (Alternative phrasing: Have you worked with engine, transmission or brake diagnostic equipment, computer and software? What was the diagnostic program and what was your involvement?)
2. What experience, knowledge, and skill do you have with air brake systems, anti-lock brakes, and heavy-duty truck suspensions?
3. Relate your experience and describe your skills working with school bus, heavy-duty truck, light-duty pick-up truck, and van bodies. (Alternative phrasing: Relate your experience and describe your skills working with heavy and medium-duty diesel and gasoline-powered engines and light-duty pick-up truck and van engines.)
(Source: www.msbo.org/library/HumanRes/Interview/Mech.doc)

Overview of Data Mining

Order Code RL31798
CRS Report for Congress, received through the CRS Web
Data Mining: An Overview
Updated December 16, 2004
Jeffrey W. Seifert, Analyst in Information Science and Technology Policy, Resources, Science, and Industry Division, Congressional Research Service, The Library of Congress

Summary
Data mining is emerging as one of the key features of many homeland security initiatives. Often used as a means for detecting fraud, assessing risk, and product retailing, data mining involves the use of data analysis tools to discover previously unknown, valid patterns and relationships in large data sets.

In the context of homeland security, data mining is often viewed as a potential means to identify terrorist activities, such as money transfers and communications, and to identify and track individual terrorists themselves, such as through travel and immigration records. While data mining represents a significant advance in the type of analytical tools currently available, there are limitations to its capability. One limitation is that although data mining can help reveal patterns and relationships, it does not tell the user the value or significance of these patterns.

These types of determinations must be made by the user. A second limitation is that while data mining can identify connections between behaviors and/or variables, it does not necessarily identify a causal relationship. To be successful, data mining still requires skilled technical and analytical specialists who can structure the analysis and interpret the output that is created. Data mining is becoming increasingly common in both the private and public sectors.

Industries such as banking, insurance, medicine, and retailing commonly use data mining to reduce costs, enhance research, and increase sales. In the public sector, data mining applications initially were used as a means to detect fraud and waste, but have grown to also be used for purposes such as measuring and improving program performance. However, some of the homeland security data mining applications represent a significant expansion in the quantity and scope of data to be analyzed.

Two efforts that have attracted a higher level of congressional interest include the Terrorism Information Awareness (TIA) project (now-discontinued) and the Computer-Assisted Passenger Prescreening System II (CAPPS II) project (now-canceled and replaced by Secure Flight). As with other aspects of data mining, while technological capabilities are important, there are other implementation and oversight issues that can influence the success of a project’s outcome. One issue is data quality, which refers to the accuracy and completeness of the data being analyzed.

A second issue is the interoperability of the data mining software and databases being used by different agencies. A third issue is mission creep, or the use of data for purposes other than for which the data were originally collected. A fourth issue is privacy. Questions that may be considered include the degree to which government agencies should use and mix commercial data with government data, whether data sources are being used for purposes other than those for which they were originally designed, and possible application of the Privacy Act to these initiatives.

It is anticipated that congressional oversight of data mining projects will grow as data mining efforts continue to evolve. This report will be updated as events warrant.

Contents
What is Data Mining?
Limitations of Data Mining
Data Mining Uses
Terrorism Information Awareness (TIA) Program
Computer-Assisted Passenger Prescreening System (CAPPS II)
Data Mining Issues
Data Quality
Interoperability
Mission Creep
Privacy
Legislation in the 108th Congress
For Further Reading

Data Mining: An Overview

What is Data Mining?
Data mining involves the use of sophisticated data analysis tools to discover previously unknown, valid patterns and relationships in large data sets. These tools can include statistical models, mathematical algorithms, and machine learning methods (algorithms that improve their performance automatically through experience, such as neural networks or decision trees). Consequently, data mining consists of more than collecting and managing data; it also includes analysis and prediction. Data mining can be performed on data represented in quantitative, textual, or multimedia forms. Data mining applications can use a variety of parameters to examine the data.

They include association (patterns where one event is connected to another event, such as purchasing a pen and purchasing paper), sequence or path analysis (patterns where one event leads to another event, such as the birth of a child and purchasing diapers), classification (identification of new patterns, such as coincidences between duct tape purchases and plastic sheeting purchases), clustering (finding and visually documenting groups of previously unknown facts, such as geographic location and brand preferences), and forecasting (discovering patterns from which one can make reasonable predictions regarding future activities, such as the prediction that people who join an athletic club may take exercise classes).

As an application, compared to other data analysis applications, such as structured queries (used in many commercial databases) or statistical analysis software, data mining represents a difference of kind rather than degree. Many simpler analytical tools utilize a verification-based approach, where the user develops a hypothesis and then tests the data to prove or disprove the hypothesis. For example, a user might hypothesize that a customer who buys a hammer will also buy a box of nails. The effectiveness of this approach can be limited by the creativity of the user to develop various hypotheses, as well as the structure of the software being used.
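To make the "association" parameter described above concrete, here is a minimal, hypothetical sketch (not taken from the CRS report): it counts how often pairs of items appear together in a handful of made-up transactions and flags pairs that clear a crude support threshold, which is the essence of association-style pattern discovery.

```python
# Minimal illustration of the "association" parameter described above:
# counting how often two items co-occur in a toy set of transactions.
# The data and threshold are invented for demonstration only.
from itertools import combinations
from collections import Counter

transactions = [
    {"hammer", "nails", "tape"},
    {"hammer", "nails"},
    {"pen", "paper"},
    {"hammer", "saw"},
    {"pen", "paper", "notebook"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Report pairs that appear in at least 2 transactions (a crude "support" threshold).
for (a, b), count in pair_counts.most_common():
    if count >= 2:
        support = count / len(transactions)
        print(f"{a} and {b}: bought together in {count} transactions (support={support:.2f})")
```

Real association-rule miners add measures such as confidence and lift on top of this counting step, but the underlying idea is the same.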

In contrast, data mining utilizes a discovery approach, in which algorithms can be used to examine several multidimensional data relationships simultaneously, identifying those that are unique or frequently represented. For example, a hardware store may compare their customers' tool purchases with home ownership, type of automobile driven, age, occupation, income, and/or distance between residence and the store. As a result of its complex capabilities, two precursors are important for a successful data mining exercise: a clear formulation of the problem to be solved, and access to the relevant data.

Reflecting this conceptualization of data mining, some observers consider data mining to be just one step in a larger process known as knowledge discovery in databases (KDD). Other steps in the KDD process, in progressive order, include data cleaning, data integration, data selection, data transformation, (data mining), pattern evaluation, and knowledge presentation.

A number of advances in technology and business processes have contributed to a growing interest in data mining in both the public and private sectors. Some of these changes include the growth of computer networks, which can be used to connect databases; the development of enhanced search-related techniques such as neural networks and advanced algorithms; the spread of the client/server computing model, allowing users to access centralized data resources from the desktop; and an increased ability to combine data from disparate sources into a single searchable source. In addition to these improved data management tools, the increased availability of information and the decreasing costs of storing it have also played a role.

(Sources cited in this section include: Two Crows Corporation, Introduction to Data Mining and Knowledge Discovery, Third Edition, Potomac, MD: Two Crows Corporation, 1999; Pieter Adriaans and Dolf Zantinge, Data Mining, New York: Addison Wesley, 1996; for a more technically-oriented definition of data mining, see http://searchcrm.techtarget.com/gDefinition/0,294236,sid11_gci211901,00.html.)
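As a rough illustration of the KDD sequence listed above, the sketch below chains the named steps into a toy pipeline. The function names, data, and step bodies are hypothetical placeholders chosen for readability; this is not an actual KDD implementation.

```python
# A sketch of the KDD process steps listed above as a simple pipeline.
# The step names come from the text; the function bodies are hypothetical.
from collections import Counter

def clean(records):                      # data cleaning: drop records with missing fields
    return [r for r in records if None not in r.values()]

def integrate(*sources):                 # data integration: merge records from several sources
    return [r for source in sources for r in source]

def select(records, fields):             # data selection: keep only the fields of interest
    return [{f: r[f] for f in fields} for r in records]

def transform(records):                  # data transformation: normalize text fields
    return [{k: v.lower() if isinstance(v, str) else v for k, v in r.items()}
            for r in records]

def mine(records):                       # data mining: count how often record patterns occur
    return Counter(tuple(sorted(r.items())) for r in records)

def evaluate(patterns, min_count=2):     # pattern evaluation: keep frequent patterns
    return {p: c for p, c in patterns.items() if c >= min_count}

def present(patterns):                   # knowledge presentation: print the surviving patterns
    for pattern, count in patterns.items():
        print(count, dict(pattern))

source_a = [{"city": "Boston", "product": "Hammer"}, {"city": None, "product": "Nails"}]
source_b = [{"city": "boston", "product": "hammer"}, {"city": "Austin", "product": "Pen"}]

records = transform(select(clean(integrate(source_a, source_b)), ["city", "product"]))
present(evaluate(mine(records), min_count=2))
```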

Over the past several years there has been a rapid increase in the volume of information collected and stored, with some observers suggesting that the quantity of the world's data approximately doubles every year. At the same time, the costs of data storage have decreased significantly from dollars per megabyte to pennies per megabyte. Similarly, computing power has continued to double every 18-24 months, while the relative cost of computing power has continued to decrease.

Data mining has become increasingly common in both the public and private sectors. Organizations use data mining as a tool to survey customer information, reduce fraud and waste, and assist in medical research. However, the proliferation of data mining has raised some implementation and oversight issues as well.

These include concerns about the quality of the data being analyzed, the interoperability of the databases and software between agencies, and potential infringements on privacy. Also, there are some concerns that the limitations of data mining are being overlooked as agencies work to emphasize their homeland security initiatives.

(Sources cited in this section include: John Makulowich, "Government Data Mining Systems Defy Definition," Washington Technology, 22 February 1999, http://www.washingtontechnology.com/news/13_22/tech_features/393-3.html; Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, New York: Morgan Kaufmann Publishers, 2001, p. 7; Pieter Adriaans and Dolf Zantinge, Data Mining, New York: Addison Wesley, 1996, pp. 5-6.)

Limitations of Data Mining
While data mining products can be very powerful tools, they are not self-sufficient applications. To be successful, data mining requires skilled technical and analytical specialists who can structure the analysis and interpret the output that is created. Consequently, the limitations of data mining are primarily data- or personnel-related, rather than technology-related. Although data mining can help reveal patterns and relationships, it does not tell the user the value or significance of these patterns. These types of determinations must be made by the user.

Similarly, the validity of the patterns discovered is dependent on how they compare to “real world” circumstances. For example, to assess the validity of a data mining application designed to identify potential terrorist suspects in a large pool of individuals, the user may test the model using data that includes information about known terrorists. However, while possibly re-affirming a particular profile, it does not necessarily mean that the application will identify a suspect whose behavior significantly deviates from the original model. Another limitation of data mining is that while it can identify connections between behaviors and/or variables, it does not necessarily identify a causal relationship.

For example, an application may identify that a pattern of behavior, such as the propensity to purchase airline tickets just shortly before the flight is scheduled to depart, is related to characteristics such as income, level of education, and Internet use. However, that does not necessarily indicate that the ticket purchasing behavior is caused by one or more of these variables. In fact, the individual's behavior could be affected by some additional variable(s) such as occupation (the need to make trips on short notice), family status (a sick relative needing care), or a hobby (taking advantage of last minute discounts to visit new destinations).
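A small simulation can make this caveat concrete. In the hypothetical sketch below (not from the report), occupation drives both income and short-notice ticket purchases; a miner that looks only at income and purchase timing would still surface a strong association between them, even though neither causes the other.

```python
# Toy illustration of the correlation-vs-causation caveat above: occupation
# (a confounder) drives both income and short-notice ticket purchases, so the
# two appear related even though neither causes the other. All numbers are
# invented for demonstration.
import random

random.seed(0)
records = []
for _ in range(1000):
    consultant = random.random() < 0.3                        # hidden driver
    income = random.gauss(95 if consultant else 55, 10)       # in thousands
    short_notice = random.random() < (0.7 if consultant else 0.1)
    records.append((income, short_notice))

avg_income_short = sum(i for i, s in records if s) / sum(1 for _, s in records if s)
avg_income_other = sum(i for i, s in records if not s) / sum(1 for _, s in records if not s)

# The "pattern": short-notice buyers have a notably higher average income,
# even though income does not cause the purchase behavior (occupation does).
print(round(avg_income_short, 1), round(avg_income_other, 1))
```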

Data Mining Uses
Data mining is used for a variety of purposes in both the private and public sectors. Industries such as banking, insurance, medicine, and retailing commonly use data mining to reduce costs, enhance research, and increase sales. For example, the insurance and banking industries use data mining applications to detect fraud and assist in risk assessment (e.g., credit scoring). Using customer data collected over several years, companies can develop models that predict whether a customer is a good credit risk, or whether an accident claim may be fraudulent and should be investigated more closely. The medical community sometimes uses data mining to help predict the effectiveness of a procedure or medicine.
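As a toy illustration of the predictive credit-risk models mentioned above, the sketch below fits a small decision tree on invented customer records and scores a new applicant. It assumes the scikit-learn library is available; the features, data, and labels are made up and are not drawn from the report.

```python
# Illustrative sketch of a predictive credit-risk model: learn from historical
# customer records, then score a new applicant. Data and features are invented;
# this assumes scikit-learn is installed and is not the report's own method.
from sklearn.tree import DecisionTreeClassifier

# Features per customer: [income (thousands), years_at_job, prior_defaults]
X_train = [
    [65, 10, 0],
    [28,  1, 2],
    [90, 15, 0],
    [35,  2, 1],
    [50,  7, 0],
    [22,  0, 3],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = good credit risk, 0 = poor credit risk

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# Score a new applicant.
new_applicant = [[45, 4, 0]]
print("good risk" if model.predict(new_applicant)[0] == 1 else "poor risk")
```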

Pharmaceutical firms use data mining of chemical compounds and genetic material to help guide research on new treatments for diseases. Retailers can use information collected through affinity programs (e.g., shoppers' club cards, frequent flyer points, contests) to assess the effectiveness of product selection and placement decisions, coupon offers, and which products are often purchased together. Companies such as telephone service providers and music clubs can use data mining to create a "churn analysis," to assess which customers are likely to remain as subscribers and which ones are likely to switch to a competitor.

In the public sector, data mining applications were initially used as a means to detect fraud and waste, but they have grown also to be used for purposes such as measuring and improving program performance. It has been reported that data mining has helped the federal government recover millions of dollars in fraudulent Medicare payments. The Justice Department has been able to use data mining to assess crime patterns and adjust resource allotments accordingly. Similarly, the Department of Veterans Affairs has used data mining to help predict demographic changes in the constituency it serves so that it can better estimate its budgetary needs. Another example is the Federal Aviation Administration, which uses data mining to review plane crash data to recognize common defects and recommend precautionary measures.

Recently, data mining has been increasingly cited as an important tool for homeland security efforts. Some observers suggest that data mining should be used as a means to identify terrorist activities, such as money transfers and communications, and to identify and track individual terrorists themselves, such as through travel and immigration records. Two initiatives that have attracted significant attention include the now-discontinued Terrorism Information Awareness (TIA) project conducted by the Defense Advanced Research Projects Agency (DARPA), and the now-canceled Computer-Assisted Passenger Prescreening System II (CAPPS II) that was being developed by the Transportation Security Administration (TSA).

CAPPS II is being replaced by a new program called Secure Flight.

(Sources and notes cited in this section include: Two Crows Corporation, Introduction to Data Mining and Knowledge Discovery, Third Edition, Potomac, MD: Two Crows Corporation, 1999, p. 5; Patrick Dillon, Data Mining: Transforming Business Data Into Competitive Advantage and Intellectual Capital, Atlanta, GA: The Information Management Forum, 1998, pp. 5-6; George Cahlink, "Data Mining Taps the Trends," Government Executive Magazine, October 1, 2000, http://www.govexec.com/tech/articles/1000managetech.htm; for a more detailed review of the purposes of data mining conducted by federal departments and agencies, see U.S. General Accounting Office, Data Mining: Federal Efforts Cover a Wide Range of Uses, GAO Report GAO-04-548, Washington: May 2004. The TIA project was originally identified as the Total Information Awareness project until DARPA publicly renamed it the Terrorism Information Awareness project in May 2003. Section 8131 of the FY2004 Department of Defense Appropriations Act (P.L. 108-87) prohibited further funding of TIA as a whole, while allowing unspecified subcomponents of the TIA initiative to be funded as part of DOD's classified budget, subject to the provisions of the National Foreign Intelligence Program, which restricts the processing and analysis of information on U.S. citizens. For further details regarding this provision, see CRS Report RL31805, Authorization and Appropriations for FY2004: Defense, by Amy Belasco and Stephen Daggett.)

Terrorism Information Awareness (TIA) Program
In the immediate aftermath of the September 11, 2001, terrorist attacks, many questions were raised about the country's intelligence tools and capabilities, as well as the government's ability to detect other so-called "sleeper cells," if, indeed, they existed. One response to these concerns was the creation of the Information Awareness Office (IAO) at the Defense Advanced Research Projects Agency (DARPA) in January 2002.

The role of IAO was "in part to bring together, under the leadership of one technical office director, several existing DARPA programs focused on applying information technology to combat terrorist threats." The mission statement for IAO suggested that the emphasis on these technology programs was to "counter asymmetric threats by achieving total information awareness useful for preemption, national security warning, and national security decision making." To that end, the TIA project was to focus on three specific areas of research, anticipated to be conducted over five years, to develop technologies that would assist in the detection of terrorist groups planning attacks against American interests, both inside and outside the country.

The three areas of research and their purposes were described in a DOD Inspector General report as: “… language translation, data search with pattern recognition and privacy protection, and advanced collaborative and decision support tools. Language translation technology would enable the rapid analysis of foreign languages, both spoken and written, and allow analysts to quickly search the translated materials for clues about emerging threats. The data search, pattern recognition, and privacy protection technologies would permit analysts to search vast quantities of data for patterns that suggest terrorist activity while at the same time controlling access to the data, enforcing laws and policies, and ensuring detection of misuse of the information obtained.

The collaborative reasoning and decision support technologies would allow analysts from different agencies to share data." Each part had the potential to improve the data mining capabilities of agencies that adopt the technology. Automated rapid language translation could allow analysts to search and monitor foreign language documents and transmissions more quickly than currently possible.

(Sources and notes cited in this section include: DARPA "is the central research and development organization for the Department of Defense (DOD)" that engages in basic and applied research, with a specific focus on "research and technology where risk and payoff are both very high and where success may provide dramatic advances for traditional military roles and missions" (http://www.darpa.mil/); Department of Defense, 20 May 2003, Report to Congress Regarding the Terrorism Information Awareness Program, Executive Summary, p. 2; Department of Defense, 20 May 2003, Report to Congress Regarding the Terrorism Information Awareness Program, Detailed Information, p. 1 (emphasis added); Department of Defense, Office of the Inspector General, 12 December 2003, Information Technology Management: Terrorism Information Awareness Project (D-2004-033), p. 7. It is important to note that while DARPA's mission is to conduct research and development on technologies that can be used to address national-level problems, it would not be responsible for the operation of TIA, if it were to be adopted.)

Improved search and pattern recognition technologies may enable more comprehensive and thorough mining of transactional data, such as passport and visa applications, car rentals, driver license renewals, criminal records, and airline ticket purchases. Improved collaboration and decision support tools might facilitate the search and coordination activities being conducted by different agencies and levels of government. In public statements DARPA frequently referred to the TIA program as a research and development project designed to create experimental prototype tools, and stated that the research agency would only use “data that is legally available and obtainable by the U.S. Government.” DARPA further emphasized that these tools could be adopted and used by other agencies, and that DARPA itself would not be engaging in any actual-use data mining applications, although it could “support production of a scalable leave-behind system prototype.” In addition, some of the technology projects being carried out in association with the TIA program did not involve data mining. However, the TIA program’s overall emphasis on collecting, tracking, and analyzing data trails left by individuals served to generate significant and vocal opposition soon after John Poindexter made a presentation on TIA at the DARPATech 2002 Conference in August 2002. Critics of the TIA program were further incensed by two administrative aspects of the project. The first involved the Director of IAO, Dr. John M. Poindexter.

Poindexter, a retired Admiral, was, until that time, perhaps best known for his alleged role in the Iran-contra scandal during the Reagan Administration. His involvement with the program caused many in the civil liberties community to question the true motives behind TIA. The second source of contention involved TIA’s original logo, which depicted an “all-seeing” eye atop a pyramid looking down over the globe, accompanied by the Latin phrase scientia est potentia (knowledge is power). Although DARPA eventually removed the logo from its website, it left a lasting impression.

The continued negative publicity surrounding the TIA program contributed to the introduction of a number of bills in Congress that eventually led to the program’s dissolution. Among these bills was S. 188, the Data-Mining Moratorium Act of 2003, which, if passed, would have imposed a moratorium on the implementation of data mining under the TIA program by the Department of Defense, as well as any similar program by the Department of Homeland Security. An amendment included in the Omnibus Appropriations Act for Fiscal Year 2003 (P.L. 108-7) required the Director of Central Intelligence, the Secretary of Defense, and the Attorney General to submit a joint report to Congress within 90 days providing details about the TIA program. Funding for TIA as a whole was prohibited with the passage of the FY2004 Department of Defense Appropriations Act (P.L. 108-87) in September 2003.

For more details about the Terrorism Information Awareness program and related information and privacy laws, see CRS Report RL31730, Privacy: Total Information Awareness Programs and Related Information Access, Collection, and Protection Laws, by Gina Marie Stevens, and CRS Report RL31786, Total Information Awareness Programs: Funding, Composition, and Oversight Issues, by Amy Belasco. See also Department of Defense, DARPA, “Defense Advanced Research Project Agency’s Information Awareness Office and Total Information Awareness Project,” p. 1, [http://www.iwar.org.uk/news-archive/tia/iaotia.pdf]. Although most of the TIA-related projects did involve some form of data collection, the primary purposes of some of these projects, such as war gaming, language translation, and biological agent detection, were less connected to data mining activities; for a description of these projects, see [http://www.fas.org/irp/agency/dod/poindexter.html]. The text of Poindexter’s presentation is available at [http://www.darpa.mil/DARPATech2002/presentations/iao_pdf/speeches/POINDEXT.pdf], and the slide presentation at [http://www.darpa.mil/DARPATech2002/presentations/iao_pdf/slides/PoindexterIAO.pdf].

However, Section 8131 of P.L. 108-87 allowed unspecified subcomponents of the TIA initiative to be funded as part of DOD’s classified budget, subject to the provisions of the National Foreign Intelligence Program, which restricts the processing and analysis of information on U.S. citizens.

Shane Harris, “Counterterrorism Project Assailed By Lawmakers, Privacy Advocates,” Government Executive Magazine, 25 November 2002, [http://www.govexec.com/dailyfed/1102/112502h1.htm]. The original logo can be found at [http://www.thememoryhole.org/policestate/iaologo.htm]. The joint report is available at [http://www.eff.org/Privacy/TIA/TIA-report.pdf]; the information required includes spending schedules, likely effectiveness of the program, likely impact on privacy and civil liberties, and any laws and regulations that may need to be changed to fully deploy TIA. If the report had not been submitted within 90 days, funding for the TIA program could have been discontinued. For more details regarding this amendment, see CRS Report RL31786, Total Information Awareness Programs: Funding, Composition, and Oversight Issues, by Amy Belasco; for the funding provision, see CRS Report RL31805, Authorization and Appropriations for FY2004: Defense, by Amy Belasco and Stephen Daggett.

Computer-Assisted Passenger Prescreening System (CAPPS II)

Similar to TIA, the CAPPS II project represented a direct response to the September 11, 2001, terrorist attacks. With the images of airliners flying into buildings fresh in people’s minds, air travel was now widely viewed not only as a critically vulnerable terrorist target, but also as a weapon for inflicting larger harm. The CAPPS II initiative was intended to replace the original CAPPS, which was still in use. Spurred, in part, by the growing number of airplane bombings, the existing CAPPS (originally called CAPS) was developed through a grant provided by the Federal Aviation Administration (FAA) to Northwest Airlines, with a prototype system tested in 1996. In 1997, other major carriers also began work on screening systems, and, by 1998, most of the U.S.-based airlines had voluntarily implemented CAPS, with the remaining few working toward implementation.

Also during this time, the White House Commission on Aviation Safety and Security (sometimes referred to as the Gore Commission) released its final report in February 1997. Included in the commission’s report was a recommendation that the United States implement automated passenger profiling for its airports. On April 19, 1999, the FAA issued a notice of proposed rulemaking (NPRM) regarding the security of checked baggage on flights within the United States (docket no. FAA-1999-5536). As part of this still-pending rule, domestic flights would be required to utilize “the FAA-approved computer-assisted passenger screening (CAPS) system to select passengers whose checked baggage must be subjected to additional security measures.” The current CAPPS system is a rule-based system that uses the information provided by the passenger when purchasing the ticket to determine whether the passenger fits into one of two categories: “selectees” requiring additional security screening, and those who do not. CAPPS also compares the passenger name to those on a list of known or suspected terrorists. CAPPS II was described by TSA as “an enhanced system to confirm the identities of passengers and to identify foreign terrorists or persons with terrorist connections before they can board U.S. aircraft.” CAPPS II would have sent information provided by the passenger in the passenger name record (PNR), including full name, address, phone number, and date of birth, to commercial data providers for comparison to authenticate the identity of the passenger.

The commercial data provider would have then transmitted a numerical score back to TSA indicating a particular risk level. Passengers with a “green” score would have undergone “normal screening,” while passengers with a “yellow” score would have undergone additional screening. Passengers with a “red” score would not have been allowed to board the flight, and would have received “the attention of law enforcement.” While drawing on information from commercial databases, TSA had stated that it would not see the actual information used to calculate the scores, and that it would not retain the traveler’s information. TSA had planned to test the system at selected airports during spring 2004.
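To make the color-coded handling concrete, the short Python sketch below shows a toy rule-based screener. It is purely illustrative: the thresholds, the identity-match score, and the watchlist check are invented for this example and do not reflect the actual CAPPS II scoring rules, which were never made public.

from dataclasses import dataclass

@dataclass
class PassengerRecord:
    full_name: str
    date_of_birth: str
    identity_score: float  # hypothetical 0-100 score returned by a commercial data provider

def risk_band(record: PassengerRecord, watchlist: set) -> str:
    # Map a passenger record to a green/yellow/red handling category (toy logic only).
    if record.full_name.lower() in watchlist:
        return "red"      # would not board; referred to law enforcement
    if record.identity_score >= 80:
        return "green"    # normal screening
    if record.identity_score >= 50:
        return "yellow"   # additional screening
    return "red"

print(risk_band(PassengerRecord("Jane Doe", "1970-01-01", 92.0), {"known suspect"}))  # -> green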

Department of Transportation, White House Commission on Aviation and Security: The DOT Status Report, February 1998, [http://www.dot.gov/affairs/whcoasas.htm]. The Gore Commission was established by Executive Order 13015 on August 22, 1996, following the crash of TWA flight 800 in July 1996. White House Commission on Aviation Safety and Security: Final Report to President Clinton, 12 February 1997, [http://www.fas.org/irp/threat/212fin~1.html]. The docket can be found online at [http://dms.dot.gov/search/document.cfm?documentid=57279&docketid=5536]. Federal Register, 64 (April 19, 1999): 19220. U.S. General Accounting Office, Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges, GAO Report GAO-04-385, February 2004, pp. 5-6. Transportation Security Administration, “TSA’s CAPPS II Gives Equal Weight to Privacy, Security,” Press Release, 11 March 2003, [http://www.tsa.gov/public/display?theme=44&content=535].

However, CAPPS II encountered a number of obstacles to implementation. One obstacle involved obtaining the required data to test the system. Several high-profile debacles resulting in class-action lawsuits have made the U.S.-based airlines very wary of voluntarily providing passenger information. In early 2003, Delta Airlines was to begin testing CAPPS II using its customers’ passenger data at three airports across the country. However, Delta became the target of a vociferous boycott campaign, raising further concerns about CAPPS II generally. In September 2003, it was revealed that JetBlue shared private passenger information in September 2002 with Torch Concepts, a defense contractor, which was testing a data mining application for the U.S. Army. The information shared reportedly included itineraries, names, addresses, and phone numbers for 1.5 million passengers. In January 2004, it was reported that Northwest Airlines provided personal information on millions of its passengers to the National Aeronautics and Space Administration (NASA) from October to December 2001 for an airline security-related data mining experiment. In April 2004, it was revealed that American Airlines agreed to provide private passenger data on 1.2 million of its customers to TSA in June 2002, although the information was sent instead to four companies competing to win a contract with TSA. Further instances of data being provided for the purpose of testing CAPPS II were brought to light during a Senate Committee on Government Affairs confirmation hearing on June 23, 2004. Robert O’Harrow, Jr., “Aviation ID System Stirs Doubt,” Washington Post, 14 March 2003, p. A16.

In his answers to the committee, the acting director of TSA, David M. Stone, stated that during 2002 and 2003 four airlines (Delta, Continental, America West, and Frontier) and two travel reservation companies (Galileo International and Sabre Holdings) provided passenger records to TSA and/or its contractors.

Sara Kehaulani Goo, “U.S. to Push Airlines for Passenger Records,” Washington Post, 12 January 2004, p. A1. The Boycott Delta website is available at [http://www.boycottdelta.org]. Don Phillips, “JetBlue Apologizes for Use of Passenger Records,” The Washington Post, 20 September 2003, p. E1; Sara Kehaulani Goo, “TSA Helped JetBlue Share Data, Report Says,” Washington Post, 21 February 2004, p. E1.

Sara Kehaulani Goo, “Northwest Gave U.S. Data on Passengers,” Washington Post, 18 January 2004, p. A1. Sara Kehaulani Goo, “American Airlines Revealed Passenger Data,” Washington Post, 10 April 2004, p. D12. For the written responses to the committee’s questions, see [http://www.epic.org/privacy/airtravel/stone_answers.pdf]; Sara Kehaulani Goo, “Agency Got More Airline Records,” Washington Post, 24 June 2004, p. A16.

Concerns about privacy protections had also dissuaded the European Union (EU) from providing any data to TSA to test CAPPS II.

However, in May 2004, the EU signed an agreement with the United States that would have allowed PNR data for flights originating from the EU to be used in testing CAPPS II, but only after TSA was authorized to use domestic data as well. As part of the agreement, the EU data was to be retained for only three-and-a-half years (unless it is part of a law enforcement action), only 34 of the 39 elements of the PNR were to be accessed by authorities,43 and there were to be yearly joint DHS-EU reviews of the implementation of the agreement. 44 Another obstacle was the perception of mission creep. CAPPS II was originally intended to just screen for high-risk passengers who may pose a threat to safe air travel.

However, in an August 1, 2003, Federal Register notice, TSA stated that CAPPS II could also be used to identify individuals with outstanding state or federal arrest warrants, as well as to identify both foreign and domestic terrorists (not just foreign terrorists). The notice also stated that CAPPS II could be “linked with the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program” to identify individuals who are in the country illegally (e.g., individuals with expired visas, illegal aliens, etc.). In response to critics who cited these possible uses as examples of mission creep, TSA claimed that the suggested uses were consistent with the goals of improving aviation security. Several other concerns had also been raised, including the length of time passenger information was to be retained, who would have access to the information, the accuracy of the commercial data being used to authenticate a passenger’s identity, the creation of procedures to allow passengers the opportunity to correct data errors in their records, and the ability of the system to detect attempts by individuals to use identity theft to board a plane undetected.

In August 2004, TSA announced that the CAPPS II program was being canceled and would be replaced with a new system called Secure Flight. In the Department of Homeland Security Appropriations Act, 2005 (P.L. 108-334), Congress included a provision (Sec. 522) prohibiting the use of appropriated funds for “deployment or implementation, on other than a test basis,” of CAPPS II, Secure Flight, “or other follow on/successor programs,” until GAO has certified that such a system has met all eight of the privacy requirements enumerated in a February 2004 GAO report, can accommodate any unique air transportation needs as they relate to interstate transportation, and that “appropriate life-cycle cost estimates, and expenditure and program plans exist.” GAO’s certification report is due to Congress no later than March 28, 2005.

Some information, such as meal preferences, which could be used to infer religious affiliation, and health considerations will not be made available. Sara Kehaulani Goo, “U.S., EU Will Share Passenger Records,” Washington Post, 29 May 2004, p. A2. Department of Homeland Security, “Fact Sheet: US-EU Passenger Name Record Agreement Signed,” 28 May 2004, [http://www.dhs.gov/dhspublic/display?content=3651]. Federal Register, Vol. 68, No. 148, August 1, 2003, p. 45266; U.S. General Accounting Office, Aviation Security: Challenges Delay Implementation of Computer-Assisted Passenger Prescreening System, GAO Testimony GAO-04-504T, 17 March 2004, p. 17. The eight issues included establishing an oversight board, ensuring the accuracy of the data used, conducting stress testing, instituting abuse prevention practices, preventing unauthorized access, establishing clear policies for the operation and use of the system, satisfying privacy concerns, and creating a redress process; U.S. General Accounting Office, Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges, GAO Report GAO-04-385, February 2004.

Data Mining Issues

As data mining initiatives continue to evolve, there are several issues Congress may decide to consider related to implementation and oversight. These issues include, but are not limited to, data quality, interoperability, mission creep, and privacy. As with other aspects of data mining, while technological capabilities are important, other factors also influence the success of a project’s outcome.

Data Quality

Data quality is a multifaceted issue that represents one of the biggest challenges for data mining. Data quality refers to the accuracy and completeness of the data. Data quality can also be affected by the structure and consistency of the data being analyzed.

The presence of duplicate records, the lack of data standards, the timeliness of updates, and human error can significantly impact the effectiveness of the more complex data mining techniques, which are sensitive to subtle differences that may exist in the data. To improve data quality, it is sometimes necessary to “clean” the data, which can involve the removal of duplicate records, normalizing the values used to represent information in the database (e.g., ensuring that “no” is represented as a 0 throughout the database, and not sometimes as a 0 and sometimes as an N), accounting for missing data points, removing unneeded data fields, identifying anomalous data points (e.g., an individual whose age is shown as 142 years), and standardizing data formats (e.g., changing dates so they all follow MM/DD/YYYY).
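As a concrete illustration of these cleaning steps, the short Python/pandas sketch below removes duplicates, normalizes a yes/no field, flags an anomalous age, and standardizes dates. The column names and records are hypothetical.

import pandas as pd

records = pd.DataFrame({
    "name":       ["A. Smith", "A. Smith", "B. Jones",   "C. Lee"],
    "opt_in":     ["0",        "0",        "N",          "yes"],
    "age":        [34,         34,         57,           142],
    "visit_date": ["1/2/03",   "1/2/03",   "2003-02-01", "03/04/2003"],
})
records = records.drop_duplicates()                                   # remove duplicate records
records["opt_in"] = records["opt_in"].map({"0": 0, "N": 0, "no": 0,   # normalize "no"/"yes" codes
                                           "1": 1, "Y": 1, "yes": 1})
records.loc[~records["age"].between(0, 120), "age"] = float("nan")    # flag anomalous data points
records["visit_date"] = records["visit_date"].apply(pd.to_datetime)   # parse mixed date formats
records["visit_date"] = records["visit_date"].dt.strftime("%m/%d/%Y") # standardize to MM/DD/YYYY
print(records)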

Interoperability

Related to data quality is the issue of interoperability of different databases and data mining software. Interoperability refers to the ability of a computer system and/or data to work with other systems or data using common standards or processes. Interoperability is a critical part of the larger efforts to improve interagency collaboration and information sharing through e-government and homeland security initiatives. For data mining, interoperability of databases and software is important to enable the search and analysis of multiple databases simultaneously, and to help ensure the compatibility of the data mining activities of different agencies. Data mining projects that are trying to take advantage of existing legacy databases, or that are initiating first-time collaborative efforts with other agencies or levels of government (e.g., police departments in different states), may experience interoperability problems. Similarly, as agencies move forward with the creation of new databases and information sharing efforts, they will need to address interoperability issues during their planning stages to better ensure the effectiveness of their data mining projects.
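A minimal sketch of what such reconciliation can look like in practice is shown below: two hypothetical agency extracts are mapped onto one shared schema before being matched. The field names, records, and matching rule are invented for illustration only.

import pandas as pd

state_records = pd.DataFrame({"LAST_NAME": ["DOE"], "FIRST": ["JOHN"], "DOB": ["1970-01-01"]})
federal_records = pd.DataFrame({"surname": ["Doe"], "given_name": ["John"], "birth_date": ["01/01/1970"]})

def to_common_schema(frame, column_map):
    # Rename agency-specific columns to the shared schema and normalize the values.
    out = frame.rename(columns=column_map)
    out["surname"] = out["surname"].str.title()
    out["given_name"] = out["given_name"].str.title()
    out["birth_date"] = pd.to_datetime(out["birth_date"]).dt.date
    return out[["surname", "given_name", "birth_date"]]

a = to_common_schema(state_records, {"LAST_NAME": "surname", "FIRST": "given_name", "DOB": "birth_date"})
b = to_common_schema(federal_records, {})  # already uses the shared field names
print(a.merge(b, on=["surname", "given_name", "birth_date"]))  # records present in both sources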

Mission Creep

Mission creep is one of the leading risks of data mining cited by civil libertarians, and represents how control over one’s information can be a tenuous proposition. Mission creep refers to the use of data for purposes other than that for which the data was originally collected. This can occur regardless of whether the data was provided voluntarily by the individual or was collected through other means. Efforts to fight terrorism can, at times, take on an acute sense of urgency. This urgency can create pressure on both data holders and officials who access the data. To leave an available resource unused may appear to some as being negligent. Data holders may feel obligated to make any information available that could be used to prevent a future attack or track a known terrorist.

Similarly, government officials responsible for ensuring the safety of others may be pressured to use and/or combine existing databases to identify potential threats. Unlike physical searches, or the detention of individuals, accessing information for purposes other than originally intended may appear to be a victimless or harmless exercise. However, such information use can lead to unintended outcomes and produce misleading results. One of the primary reasons for misleading results is inaccurate data. All data collection efforts suffer accuracy concerns to some degree. Ensuring the accuracy of information can require costly protocols that may not be cost effective if the data is not of inherently high economic value.

In well-managed data mining projects, the original data collecting organization is likely to be aware of the data’s limitations and account for these limitations accordingly. However, such awareness may not be communicated or heeded when data is used for other purposes. For example, the accuracy of information collected through a shopper’s club card may suffer for a variety of reasons, including the lack of identity authentication when a card is issued, cashiers using their own cards for customers who do not have one, and/or customers who use multiple cards. 48 For the purposes of marketing to consumers, the impact of these inaccuracies is negligible to the individual.

If a government agency were to use that information to target individuals based on food purchases associated with particular religious observances, though, an outcome based on inaccurate information could be, at the least, a waste of resources by the government agency, and an unpleasant experience for the misidentified individual. As the March 2004 TAPAC report observes, the potential wide reuse of data suggests that concerns about mission creep can extend beyond privacy to the protection of civil rights in the event that information is used for “targeting an individual solely on the basis of religion or expression, or using information in a way that would violate the constitutional guarantee against self-incrimination.” (Technology and Privacy Advisory Committee, Department of Defense, Safeguarding Privacy in the Fight Against Terrorism, March 2004, pp. 39-40.)

Privacy

As additional information sharing and data mining initiatives have been announced, increased attention has focused on the implications for privacy. Concerns about privacy focus both on actual projects proposed, as well as on the potential for data mining applications to be expanded beyond their original purposes (mission creep). For example, some experts suggest that anti-terrorism data mining applications might also be useful for combating other types of crime as well. (Drew Clark, “Privacy Experts Differ on Merits of Passenger-Screening Program,” Government Executive Magazine, November 21, 2003, [http://www.govexec.com/dailyfed/1103/112103td2.htm].) So far there has been little consensus about how data mining should be carried out, with several competing points of view being debated. Some observers contend that tradeoffs may need to be made regarding privacy to ensure security. Other observers suggest that existing laws and regulations regarding privacy protections are adequate, and that these initiatives do not pose any threats to privacy. Still other observers argue that not enough is known about how data mining projects will be carried out, and that greater oversight is needed. There is also some disagreement over how privacy concerns should be addressed. Some observers suggest that technical solutions are adequate.

In contrast, some privacy advocates argue in favor of creating clearer policies and exercising stronger oversight. As data mining efforts move forward, Congress may consider a variety of questions, including the degree to which government agencies should use and mix commercial data with government data, whether data sources are being used for purposes other than those for which they were originally designed, and the possible application of the Privacy Act to these initiatives.

Legislation in the 108th Congress

During the 108th Congress, a number of legislative proposals were introduced that would restrict data mining activities by some parts of the federal government, and/or increase the reporting requirements of such projects to Congress.

For example, on January 16, 2003, Senator Feingold introduced S. 188, the Data-Mining Moratorium Act of 2003, which would have imposed a moratorium on the implementation of data mining under the Total Information Awareness program (now referred to as the Terrorism Information Awareness project) by the Department of Defense, as well as any similar program by the Department of Homeland Security. S. 188 was referred to the Committee on the Judiciary.

On January 23, 2003, Senator Wyden introduced S. Amdt. 59, an amendment to H.J. Res. 2, the Omnibus Appropriations Act for Fiscal Year 2003. As passed in its final form as part of the omnibus spending bill (P.L. 108-7) on February 13, 2003, and signed by the President on February 20, 2003, the amendment requires the Director of Central Intelligence, the Secretary of Defense, and the Attorney General to submit a joint report to Congress within 90 days providing details about the TIA program. The report is available at [http://www.eff.org/Privacy/TIA/TIA-report.pdf]. Some of the information required includes spending schedules, likely effectiveness of the program, likely impact on privacy and civil liberties, and any laws and regulations that may need to be changed to fully deploy TIA.

If the report had not been submitted within 90 days, funding for the TIA program could have been discontinued. Funding for TIA was later discontinued in Section 8131 of the FY2004 Department of Defense Appropriations Act (P.L. 108-87), signed into law on September 30, 2003.

On March 13, 2003, Senator Wyden introduced an amendment to S. 165, the Air Cargo Security Act, requiring the Secretary of Homeland Security to submit a report to Congress within 90 days providing information about the impact of CAPPS II on privacy and civil liberties. The amendment was passed by the Committee on Commerce, Science, and Transportation, and the bill was forwarded for consideration by the full Senate (S. Rept. 108-38). In May 2003, S. 165 was passed by the Senate with the Wyden amendment included and was sent to the House, where it was referred to the Committee on Transportation and Infrastructure.

Funding restrictions on CAPPS II were included in Section 519 of the FY2004 Department of Homeland Security Appropriations Act (P.L. 108-90), signed into law October 1, 2003. This provision included restrictions on the “deployment or implementation, on other than a test basis, of the Computer-Assisted Passenger Prescreening System (CAPPSII),” pending the completion of a GAO report regarding the efficacy, accuracy, and security of CAPPS II, as well as the existence of an appeals process for individuals identified as a potential threat by the system. Section 519 specifically identifies eight issues that TSA must address before it can spend funds to deploy or implement CAPPS II on other than a test basis: (1) establishing a system of due process for passengers to correct erroneous information; (2) assessing the accuracy of the databases being used; (3) stress testing the system and demonstrating the efficiency and accuracy of the search tools; (4) establishing an internal oversight board; (5) installing operational safeguards to prevent abuse; (6) installing security measures to protect against unauthorized access by hackers or other intruders; (7) establishing policies for effective oversight of system use and operation; and (8) addressing any privacy concerns related to the system. In its report delivered to Congress in February 2004, GAO reported that “As of January 1, 2004, TSA has not fully addressed seven of the eight CAPPS II issues identified by the Congress as key areas of interest.” The one issue GAO determined that TSA had addressed is the establishment of an internal oversight board.

GAO attributed the incomplete progress on these issues partly to the “early stage of the system’s development.” (U.S. General Accounting Office, Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges, GAO-04-385, February 2004, p. 4.)

On March 25, 2003, the House Committee on Government Reform Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census held a hearing on the current and future possibilities of data mining. The witnesses, drawn from federal and state government, industry, and academia, highlighted a number of perceived strengths and weaknesses of data mining, as well as the still-evolving nature of the technology and practices behind data mining. Witnesses testifying at the hearing included Florida State Senator Paula Dockery, Dr. Jen Que Louie representing Nautilus Systems, Inc., Mark Forman representing OMB, Gregory Kutz representing GAO, and Jeffrey Rosen, an Associate Professor at George Washington University Law School. While data mining was alternatively described by some witnesses as a process, and by other witnesses as a productivity tool, there appeared to be a general consensus that the challenges facing the future development and success of government data mining applications were related less to technological concerns than to other issues such as data integrity, security, and privacy. On May 6 and May 20, 2003, the Subcommittee also held hearings on the potential opportunities and challenges for using factual data analysis for national security purposes.

On July 29, 2003, Senator Wyden introduced S. 1484, the Citizens’ Protection in Federal Databases Act, which was referred to the Committee on the Judiciary. Among its provisions, S. 1484 would have required the Attorney General, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of the Treasury, the Director of Central Intelligence, and the Director of the Federal Bureau of Investigation to submit to Congress a report containing information regarding the purposes, type of data, costs, contract durations, research methodologies, and other details before obligating or spending any funds on commercially available databases. S. 1484 would also have set restrictions on the conduct of searches or analysis of databases “based solely on a hypothetical scenario or hypothetical supposition of who may commit a crime or pose a threat to national security.”

On July 31, 2003, Senator Feingold introduced S. 1544, the Data-Mining Reporting Act of 2003, which was referred to the Committee on the Judiciary. Among its provisions, S. 1544 would have required any department or agency engaged in data mining to submit a public report to Congress regarding these activities. These reports would have been required to include a variety of details about the data mining project, including a description of the technology and data to be used, an assessment of the expected efficacy of the data mining project, a privacy impact assessment, an analysis of the relevant laws and regulations that would govern the project, and a discussion of procedures for informing individuals that their personal information will be used and allowing them to opt out, or an explanation of why such procedures are not in place.

Also on July 31, 2003, Senator Murkowski introduced S. 1552, the Protecting the Rights of Individuals Act, which was referred to the Committee on the Judiciary. Among its provisions, section 7 of S. 1552 would have imposed a moratorium on data mining by any federal department or agency “except pursuant to a law specifically authorizing such data-mining program or activity by such department or agency.” It also would have required the head of each department or agency of the federal government that engages or plans to engage in any activities relating to the development or use of a data-mining program or activity to submit to Congress, and make available to the public, a report on such activities.

On May 5, 2004, Representative McDermott introduced H.R. 4290, the Data-Mining Reporting Act of 2004, which was referred to the House Committee on Government Reform Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census. H.R. 4290 would have required each department or agency of the federal government engaged in any activity to use or develop data-mining technology to submit a public report to Congress on all such activities of the department or agency under the jurisdiction of that official. A similar provision was included in H.R. 4591/S. 2528, the Civil Liberties Restoration Act of 2004. S. 2528 was introduced by Senator Kennedy on June 16, 2004, and referred to the Committee on the Judiciary. H.R. 4591 was introduced by Representative Berman on June 16, 2004, and referred to the Committee on the Judiciary and the Permanent Select Committee on Intelligence.

For Further Reading

CRS Report RL32597, Information Sharing for Homeland Security: A Brief Overview, by Harold C. Relyea and Jeffrey W. Seifert.

CRS Report RL31408, Internet Privacy: Overview and Pending Legislation, by Marcia S. Smith.

CRS Report RL30671, Personal Privacy Protection: The Legislative Response, by Harold C. Relyea. Archived.

CRS Report RL31730, Privacy: Total Information Awareness Programs and Related Information Access, Collection, and Protection Laws, by Gina Marie Stevens.

CRS Report RL31786, Total Information Awareness Programs: Funding, Composition, and Oversight Issues, by Amy Belasco.

DARPA, Report to Congress Regarding the Terrorism Information Awareness Program, May 20, 2003, [http://www.eff.org/Privacy/TIA/TIA-report.pdf].

Department of Defense, Office of the Inspector General, Information Technology Management: Terrorism Information Awareness Program (D-2004-033), December 12, 2003, [http://www.dodig.osd.mil/audit/reports/FY04/04-033.pdf].

Genome Wide Analysis and Comparative Docking Studies of New Diaryl Furan Derivatives Against Human Cyclooxygenase-2, Lipoxygenase, Thromboxane Synthase and Prostacyclin Synthase Enzymes Involved in Inflammatory Pathway

Journal of Molecular Graphics and Modelling xxx (2009) xxx–xxx

Genome wide analysis and comparative docking studies of new diaryl furan derivatives against human cyclooxygenase-2, lipoxygenase, thromboxane synthase and prostacyclin synthase enzymes involved in inflammatory pathway

P. Nataraj Sekhar (a), L. Ananda Reddy (a,*), Marc De Maeyer (b), K. Praveen Kumar (c), Y. S. Srinivasulu (c), M. S. L. Sunitha (a), I. S. N. Sphoorthi (d), G. Jayasree (d), V. Srikanth (e), A. Maruthi Rao (f), V. S. Kothekar (g), Inder Konka (h), P. V. B. S. Narayana (i), P. B. Kavi Kishor (a)

(a) Department of Genetics, Osmania University, Hyderabad 500 007, India; (b) Laboratory of Biomolecular Modelling, Division Biochemistry, Molecular and Structural Biology, Department of Chemistry, Katholieke University, Leuven, Belgium; (c) Srivenkateswara University, Tirupathi 517 501, India; (d) Srinidhi Institute of Science and Technology, Hyderabad 500 007, India; (e) St. Peters Institute of Pharmaceutical Sciences, Warangal 506 001, India; (f) Department of Botany, Telangana University, Nizamabad 503 002, India; (g) Department of Botany, Dr. B. A. M. University, Aurangabad 431 004, India; (h) G. Pullareddy College of Pharmacy, Mehadipatnam, Hyderabad, India; (i) CARISM, SASTRA University, Thanjavur, India

* Corresponding author. Tel.: +91 40 2768 2335; fax: +91 40 2709 5178. E-mail address: lakkireddy_anandareddy@rediffmail.com (L. A. Reddy).

Article history: Received 20 April 2009; Received in revised form 19 August 2009; Accepted 20 August 2009; Available online xxx.

Keywords: COX-2; Thromboxane synthase; Lipoxygenase; Homology modelling; Docking

Abstract: In an effort to develop potent anti-inflammatory and antithrombotic drugs, a series of new 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide analogs were designed and docked against homology models of human cyclooxygenase-2 (COX-2), lipoxygenase and thromboxane synthase enzymes built using MODELLER 7v7 software and refined by molecular dynamics for 2 ns in a solvated layer. Validation of these homology models by PROCHECK, Verify-3D and ERRAT programs revealed that these models are highly reliable.

Docking studies of 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide analogs, designed by substituting different chemical groups on the benzene rings and replacing the 1H-pyrazole in celecoxib with five-membered thiophene, furan, 1H-pyrrole, 1H-imidazole, thiazole and 1,3-oxazole rings, showed that the diaryl furan molecules have good binding affinity towards mouse COX-2. Further, the docking studies suggest that diaryl furan derivatives are likely to have superior thromboxane synthase and COX-2 selectivity.

Docking studies against the site-directed mutants Arg120Ala, Ser530Ala, Ser530Met and Tyr355Phe displayed the effect of these substitutions on the inhibition of COX-2. Drug likeliness and activity decay calculations for these inhibitors showed that these molecules can act as good drugs at very low concentrations. © 2009 Elsevier Inc. All rights reserved.

1. Introduction

Cyclooxygenase-2 (COX-2) is an important enzyme responsible for the formation of biological mediators called prostanoids [1], including prostaglandins (PGs), prostacyclins and thromboxanes [2].

PGs are ubiquitous fatty-acid derivatives that serve as autocrine/paracrine mediators involved in many different physiological processes. Non-steroidal anti-inflammatory drugs and COX-2 inhibitors bind to COX-2 and provide relief from the symptoms of pain and inflammation. COX converts arachidonic acid (AA, an ω-6 essential fatty acid) to prostaglandin H2 (PGH2), the precursor of the series-2 prostanoids [3]. The enzyme contains two active sites: a heme with peroxidase activity, responsible for the reduction of PGG2 to PGH2, and a cyclooxygenase active site, which binds arachidonic acid and cyclizes and oxygenates it to form the unstable intermediate PGG2. This short-lived molecule diffuses from the COX active site to the peroxidase active site, where a hydroperoxyl moiety on PGG2 is reduced to a hydroxyl.

The resulting PGH2 acts as a substrate for isomerase pathways that produce other PG, thromboxane, and prostacyclin isomers. The enzyme exists in two forms, COX-1 and COX-2. COX-1 is constitutively expressed and COX-2 is inducible. Both these enzymes show 60% homology with the same catalytic site, except that isoleucine at position 523 in COX-1 is replaced with valine in COX-2, responsible for a hydrophobic side pocket in the enzyme. Both the isoenzymes are homodimers with distinct domains for dimerization, membrane binding and catalysis. Recent studies indicate that COX-1 utilizes arginine120 in its active site to form an ionic bond with the carboxylate group of arachidonate.

Conversely, arginine120 appears to form a hydrogen bond with arachidonate in the COX-2 active site, and this interaction contributes less to the binding energy than the ionic bond formation does in COX-1 [4]. This and other subtle differences in the COX-2 active site were exploited to produce COX-2 selective inhibitors.

1.1. Lipoxygenases, thromboxane synthase and prostacyclin synthase

Lipoxygenases possess regiospecificity during interaction with substrates and on this basis were designated as arachidonate 5-, 8-, 12-, and 15-lipoxygenases (5-LOX, 8-LOX, 12-LOX, and 15-LOX) [1,5–9].

The four distinct enzymes insert oxygen at carbon 5, 8, 12 or 15 of arachidonic acid. The primary products are 5S-, 8S-, 12S-, or 15S-hydroperoxyeicosatetraenoic acid (5-, 8-, 12-, or 15-HPETE), which can be further reduced by glutathione peroxidase to the hydroxy forms (5-, 8-, 12-, 15-HETE), respectively [5–9]. The 5-LOX represents a dioxygenase that possesses two distinct enzymatic activities leading to the formation of LTA4. First, it catalyzes the incorporation of molecular oxygen into arachidonic acid (oxygenase activity), producing HPETE, and subsequently forms the unstable epoxide LTA4 by LTA4 synthase activity [5,10].

This is followed by the insertion of molecular oxygen at position C5, converting LTA4 to either 5(S)-hydroxy-6-trans-8,11,14-cis-eicosatetraenoic acid (5-HETE) or leukotrienes. Thromboxane is a member of the family of lipids known as eicosanoids. It is produced in platelets by thromboxane-A synthase from the endoperoxides produced by the cyclooxygenase enzyme from arachidonic acid. Thromboxane synthase, a cytochrome P450 enzyme, catalyzes the conversion of the prostaglandin endoperoxide into thromboxane A2 (TXA2), a potent vasoconstrictor and inducer of platelet aggregation.

In concert with prostacyclin, TXA2 plays a pivotal role in the maintenance of homeostasis. TXA2 is a major oxygenated metabolite of arachidonic acid in the platelets [11]. TXA2, produced by activated platelets, has prothrombotic properties, stimulating activation of new platelets as well as increasing platelet aggregation. Platelet aggregation is achieved by mediating expression of the glycoprotein complex GPIIb/IIIa in the cell membrane of platelets. Circulating fibrinogen binds these receptors on adjacent platelets, further strengthening the clot.

Prostacyclin is a member of the family of lipid molecules known as eicosanoids. These are produced in endothelial cells from PGH2 by the action of the enzyme prostacyclin synthase. Although prostacyclin is considered as an independent mediator, it is called prostaglandin I2 (PGI2) in eicosanoid nomenclature, and is a member of the prostanoids (together with the prostaglandins and thromboxane). PGI2 is derived from the ω-6 arachidonic acid. The series-3 prostaglandin PGH3 also follows the prostacyclin synthase pathway, yielding another prostacyclin, PGI3 [12]. PGI3 is derived from the ω-3 EPA.

Prostacyclin acts as a vasodilator and prevents platelet formation and clumping involved in blood clotting. In the present study we address the mode of interaction of diaryl furan derivatives with the cyclooxygenase-2, thromboxane synthase, lipoxygenase and prostacyclin synthase active sites, making use of docking to predict their binding affinities.

2. Materials and methods

2.1. Phylogenetic analysis

The reference protein of COX-2 from human (P35354), of well-established molecular function, was chosen as the query sequence to search against the human GenBank database, High Throughput

Genomic Sequences (HTG) and Non-Redundant (NR) databases, using the TBLASTN tool of the National Center for Biotechnology Information (NCBI) [13]. A cutoff E-value (e-10) was set as the selection criterion for BLAST hits to genomic sequences. The genomic sequences found were used to predict putative genes contained within them. The genes were predicted using GeneScan [14], GenomeScan [14], FGENESH [15], GeneMark (http://opal.biology.gatech.edu/GeneMark/eukhmm.cgi) and GrailEXP [16]. Sequences with similar expression were found by BLAST searches against the EST and NR databases of GenBank, using the genomic sequence as query.

Each new predicted CDS served as a query sequence for new BLAST searches, leading to the identification of the largest possible number of related sequences. The predicted CDS were translated into amino acids and compared to the reference sequence. The protein sequences were aligned using ClustalW 1.8 [17]. Further, the multiple alignments were edited with the help of GENEDOC (Free Software Foundation, Inc.). Proteins with greater than 30% identity to the reference protein were regarded as functionally similar (homologous) to the reference protein and received the same name [18–21].
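The hit-selection step described above (the e-10 E-value cutoff and the 30% identity rule) can be expressed in a few lines of Python. The sketch below assumes a TBLASTN run has already been saved in XML format (the file name is a placeholder) and uses Biopython's NCBIXML parser; it illustrates the filtering logic only and is not the authors' actual script.

from Bio.Blast import NCBIXML

E_CUTOFF = 1e-10
MIN_IDENTITY = 0.30
kept = []
with open("cox2_tblastn_hits.xml") as handle:            # placeholder BLAST output file
    for record in NCBIXML.parse(handle):
        for alignment in record.alignments:
            for hsp in alignment.hsps:
                identity = hsp.identities / hsp.align_length
                if hsp.expect <= E_CUTOFF and identity >= MIN_IDENTITY:
                    kept.append((alignment.hit_def, hsp.expect, identity))
for hit_def, evalue, identity in kept:
    print(f"{identity:.0%}\t{evalue:.1e}\t{hit_def}")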

Those sequences that did not conform to this criterion were discarded. In each family, similar sequences were removed and the sequences were subjected to the PROSITE and Pfam databases to check for the presence of signature sequences for the corresponding families. The selected proteins were evaluated for the presence of transmembrane domains using the HMMTOP algorithm [22]. Protein alignments obtained with ClustalW 1.8 [17] were used as starting points for phylogenetic analysis, based on the Parsimony method, using TREEVIEW software [23]. In all cases, 1000 bootstrap replications tested the tree topology obtained.

2.2. Homology modelling

The sequences of the human lipoxygenase (1–701 amino acids), thromboxane synthase (31–533 amino acids), and cyclooxygenase-2 (1–604 amino acids) enzymes (accession numbers AAC79680.1, P24557, and P35354) were obtained from NCBI and SWISS-PROT. The 3D models of human lipoxygenase, thromboxane synthase, and cyclooxygenase-2 were built using MODELLER 7v7 software on the Windows operating system [24]. Twenty models were generated for each of the protein structures used in this study. Template structures from the related family were identified using the BLAST server [13] against the Protein Data Bank (PDB).
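For the model-building step, a minimal MODELLER script along the lines described above might look like the sketch below. The alignment file name and the sequence/template codes are placeholders; only the generation of twenty candidate models follows the text.

from modeller import *               # MODELLER must be installed and licensed
from modeller.automodel import *

env = environ()
env.io.atom_files_directory = ['.']
a = automodel(env,
              alnfile='cox2_3pgh.ali',   # placeholder target-template alignment file
              knowns='3pgh',             # template code reported above (PDB 3PGH)
              sequence='human_cox2')     # target code used in the alignment file
a.starting_model = 1
a.ending_model = 20                      # twenty models, as described above
a.make()
# The model with the lowest MODELLER objective function is then taken forward.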

Reference structures were chosen to build the 3D models of lipoxygenase, thromboxane synthase, and COX-2 based on the sequence alignments that showed maximum identity, with a high score and low E-value. The coordinates for the structurally conserved regions (SCRs) of the query sequences were assigned from the template using pairwise sequence alignment, based on the Needleman–Wunsch algorithm [25,17].

2.3. Molecular dynamics studies

The structure with the least MODELLER objective function obtained from MODELLER was improved by molecular dynamics and equilibration methods using NAMD 2 software [26] and the CHARMM22 force field for lipids and proteins [27–29], along with the TIP3P model for water [30]. The simulations began with a 10,000-step minimization of the designed side chains and solvent to remove any bad contacts. A cutoff of 12 Å (switching function starting at 10 Å) for van der Waals interactions was assumed. An integration time step of 2 fs was used, permitting a multiple time-stepping algorithm [31,32] to be employed, in which interactions involving covalent bonds were computed every time step.

Short-range non-bonded interactions were computed every two time steps, and long-range electrostatic forces were computed every four time steps. The pair list of the non-bonded interactions was recalculated every 10 time steps with a pair-list distance of 13.5 Å. The short-range non-bonded interactions were defined as van der Waals and electrostatic interactions between particles within 12 Å. A smoothing function was employed for the van der Waals interactions at a distance of 10 Å. The backbone atoms were harmonically constrained with a restraining constant of 10.0 kcal/mol Å2, and the systems were heated to 300 K over the course of 6 ps at constant volume.
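The update schedule implied by these settings (bonded terms every 2 fs step, short-range non-bonded terms every two steps, long-range electrostatics every four, pair-list rebuilds every ten) can be pictured with the toy loop below; the force routines are empty stand-ins, not a real MD engine.

DT_FS = 2.0  # base integration time step, fs

def bonded_forces(step): pass              # evaluated every step
def short_range_nonbonded(step): pass      # evaluated every 2 steps
def long_range_electrostatics(step): pass  # evaluated every 4 steps
def rebuild_pair_list(step): pass          # rebuilt every 10 steps (13.5 A pair-list distance)

def run(n_steps):
    for step in range(n_steps):
        bonded_forces(step)
        if step % 2 == 0:
            short_range_nonbonded(step)
        if step % 4 == 0:
            long_range_electrostatics(step)
        if step % 10 == 0:
            rebuild_pair_list(step)
        # ...integrate positions and velocities here using DT_FS...

run(20)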

The simulations were equilibrated for 2 ns in the NPT ensemble (1 atm, 300 K) while the harmonic constraints were gradually turned off. With no harmonic constraints, the simulations ran for 2 ns in the NPT ensemble using Langevin dynamics at a temperature of 300 K with a damping coefficient of γ = 5 ps⁻¹ [33]. Pressure was maintained at 1 atm using the Langevin piston method with a piston period of 100 fs, a damping time constant of 50 fs, and a piston temperature of 300 K. Non-bonded interactions were smoothly switched off from 10 to 12 Å, and the list of non-bonded interactions was truncated at 14 Å. Covalent bonds involving hydrogen were held rigid using the SHAKE algorithm, allowing a 2 fs time step. No periodic boundary conditions were included for the above studies. Atomic coordinates were saved every 1 ps for the trajectory analysis during the last 2 ns of MD simulation. CHARMM22 force field parameters were used in all simulations in this study. Finally, a graph of the root mean square deviation (RMSD) of the Cα trace versus time (ns) was drawn.
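The RMSD-versus-time analysis and the selection of the lowest-RMSD frame can be reproduced with a trajectory-analysis library. The sketch below uses MDAnalysis as one possible tool (an assumption; the paper does not name its analysis software) and placeholder file names.

import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("model_solvated.psf", "equilibration.dcd")  # placeholder topology/trajectory
calc = rms.RMSD(u, select="name CA")                         # RMSD of the C-alpha trace
calc.run()
frames, times_ps, rmsd_A = calc.results.rmsd.T               # columns: frame, time (ps), RMSD (Angstrom)
best = int(frames[np.argmin(rmsd_A)])
print(f"lowest C-alpha RMSD {rmsd_A.min():.2f} A at frame {best}")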

The structure with the least RMSD of the Cα trace in the trajectory generated was used for further studies.

2.4. Validation of 3D models and active site identification

The final structure obtained was analyzed by Ramachandran map using PROCHECK (Programs to Check the Stereochemical Quality of Protein Structures) [34], environment profile using Verify-3D (Structure Evaluation Server) [35] and ERRAT graphs [36]. ERRAT assesses the distribution of different types of atoms with respect to one another in the protein model. The residue packing and atomic contact analysis was performed with the WhatIf program [37] to identify bad packing of side chain atoms or unusual residue contacts. The software WHATCHECK [38] was used to obtain the Z-score of the Ramachandran plot. These models were used for the identification of the active site and for docking of the inhibitors with the enzymes. The active site was predicted using an alpha-shape algorithm to determine potential active sites in 3D protein structures in the MOE Site Finder [39,40]. Binding sites were defined by atoms within 5.0 Å of the ligand or alpha spheres and trimmed at the edges to define a contiguous binding site surface of approximately 300 Å2 of surface area.

2.5. Docking studies and bioactivity

The inhibitors 4-(2-phenyl-3-thienyl) benzene sulfonamide, 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide, 4-(2-phenyl-1H-pyrrol-3-yl) benzene sulfonamide, 4-(5-phenyl-1H-imidazol-4-yl) benzene sulfonamide, 4-(5-phenyl-1,3-oxazol-4-yl) benzene sulfonamide, and 4-(5-phenyl-1,3-thiazol-4-yl) benzene sulfonamide analogs, including all hydrogen atoms, were built and optimized with the CHEMSKETCH software suite. The top 30 compounds, selected based on structural diversity and high affinity scores predicted using the OPENEYE software suite, were used for further docking against the homology models of human COX-2, lipoxygenase, and thromboxane synthase and the crystal structures of mouse COX-2 and human prostacyclin synthase. Extremely Fast Rigid Exhaustive Docking (FRED) version 2.1 was used for the docking studies (OpenEye Scientific Software, Santa Fe, NM). FRED docking roughly consists of two steps: shape fitting and optimization. During shape fitting, the ligand was placed into a 0.5-Å resolution grid box encompassing all active site atoms (including hydrogens) using a smooth Gaussian potential [41].

A series of three optimization filters was then processed, consisting of (1) refining the position of hydroxyl hydrogen atoms of the ligand, (2) rigid body optimization, and (3) optimization of the ligand pose in the dihedral angle space. In the optimization step, four scoring functions are available: Gaussian shape scoring [41], chemscore [42], PLP [43] and screenscore [44]. The binding pocket was defined using the ligand-free protein structure and a box enclosing the binding site. This box was defined by extending the size of the ligand by 4 Å (the add box parameter of FRED).

One unique pose for each of the best-scored compounds was saved for the subsequent steps. The compounds used for docking were converted to 3D with OMEGA, which has previously been shown to select a conformation similar to that of the X-ray input when using appropriate parameters [45] (a low-energy cutoff to discard high-energy conformations, a low RMSD value below which two conformations are considered to be similar, and a maximum of 500–1000 output conformations) (OpenEye Scientific Software, Santa Fe, NM). The bioavailability of compounds was assessed using absorption, distribution, metabolism, elimination (ADME) prediction methods.

Compounds were also tested against Lipinski's rule of five using Molinspiration [46]. Briefly, this rule is based on the observation that most orally administered drugs have a molecular weight of 500 or less, a log P no higher than five, 5 or fewer hydrogen bond donor sites, and 10 or fewer hydrogen bond acceptor sites (N and O atoms). The polar surface area (PSA) was also calculated, since it is another key property linked to drug absorption, including intestinal absorption, bioavailability, Caco-2 permeability and blood-brain barrier penetration. Thus, passively absorbed molecules with a

PSA > 140 Å2 are thought to have low oral bioavailability. Drug likeliness was also calculated using the OSIRIS server, which is based on a list of about 5300 distinct substructure fragments created from 3300 traded drugs as well as 15,000 commercially available chemicals, yielding a complete list of all available fragments with associated drug likeliness [47]. The drug score combines drug likeliness, c log P, log S, molecular weight and toxicity risks as a total value that may be used to judge the compound's overall potential to qualify as a drug.
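The rule-of-five and PSA screen described above is straightforward to reproduce with an open-source toolkit. The sketch below uses RDKit rather than Molinspiration/OSIRIS (a substitution made only for illustration), and the SMILES string is a placeholder for one of the designed sulfonamide analogs.

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski, rdMolDescriptors

def drug_likeness_screen(smiles):
    mol = Chem.MolFromSmiles(smiles)
    props = {
        "MW":   Descriptors.MolWt(mol),
        "logP": Crippen.MolLogP(mol),
        "HBD":  Lipinski.NumHDonors(mol),
        "HBA":  Lipinski.NumHAcceptors(mol),
        "TPSA": rdMolDescriptors.CalcTPSA(mol),
    }
    props["rule_of_five_ok"] = (props["MW"] <= 500 and props["logP"] <= 5
                                and props["HBD"] <= 5 and props["HBA"] <= 10)
    props["psa_ok"] = props["TPSA"] <= 140   # crude oral-absorption check
    return props

# placeholder SMILES for a 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide analog
print(drug_likeness_screen("NS(=O)(=O)c1ccc(cc1)C1CCOC1c1ccccc1"))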

3. Results

3.1. Phylogenetic analysis

Genome wide analysis of human COX sequences showed 11 characterized genes of COX-1 and 4 of COX-2. Percent identity for all the sequences was calculated in each family with the corresponding query sequence using GENEDOC (Free Software Foundation, Inc.) based on multiple sequence alignment (Fig. 1). Multiple sequence alignment of COX-1 and COX-2 shows that Arg120 is highly conserved. Our results show that these proteins contain one transmembrane domain. Phylogenetic analysis of COX sequences revealed that COX-1 and COX-2 are divergent, showing separate branches in the tree view (Fig. 2). It revealed two major families, with five subfamilies of COX-1, indicating different functions for each family (Table 1).

3.2. Homology modelling and validation

Reference proteins 1LOX, 1TQN and 3PGH have 26%, 32%, and 60% sequence identity with lipoxygenase (1–701 amino acids), thromboxane synthase (31–533 amino acids) and COX-2 (19–570 amino acids) from human. These reference proteins were used as templates for modelling human lipoxygenase, thromboxane synthase, and COX-2. Coordinates of the structurally conserved regions (SCRs), structurally variable regions (SVRs), N-termini

Fig. 1. Multiple sequence alignment of COX-1 and COX-2 predicted using ClustalX software. Conserved residues are indicated with *. Hum represents human.

and C-termini from the templates were assigned to the target sequences based on the satisfaction of spatial restraints. All side chains of the model protein were set by rotamers. The models generated were refined by molecular dynamics and equilibration methods using NAMD software, and the trajectory graph of the RMSD of the Cα trace versus time (ns) was drawn (Fig. 3). It was found from these figures that the RMSD was stable around 1.5 ns and then increased and decreased at 2 ns in the case of human COX-2 (Fig. 3A), while it was stable for thromboxane synthase (Fig. 3B) and lipoxygenase (Fig. 3C). Final stable structures of these three human proteins (lipoxygenase, thromboxane synthase and human COX-2) contain 21, 21 and 27 α-helices and 18, 10 and 17 β-sheets, respectively, as shown in Fig. 4. It appears from the Ramachandran plot that 75.9%, 78.8% and 84.5% of the residues are located within the most favored regions, 20.8%, 18.7% and 13.9% in additionally allowed, 1.7%, 1.1% and 0.6% in generously allowed, and 1.7%, 1.4% and 1.1% in disallowed regions for lipoxygenase, thromboxane synthase, and COX-2, respectively. The RMSDs for covalent bonds were −0.7, −0.62, and −0.52 Å, and for covalent angles −12.63, −1.99, and −1.94, relative to the standard dictionary, for human lipoxygenase, thromboxane synthase and COX-2. Altogether 98.3%, 98.6% and 98.9% of the residues of human lipoxygenase, thromboxane synthase and COX-2 were in favored and allowed regions. The PROCHECK G-factors of human lipoxygenase, thromboxane synthase and COX-2 were −5.58, −1.15 and −1.07. Overall quality factors of 70.425 for lipoxygenase, 86.441 for thromboxane synthase and 93.360 for COX-2 were observed with the use of the ERRAT environment profile. Also, 91.2% of residues for COX-2 (Fig. 5A), 84.76% for thromboxane synthase (Fig. 5B) and 89.72% for lipoxygenase (Fig. 5C) had an average 3D–1D score greater than 2 when Verify-3D was used, indicating that the models built are highly reliable. Evaluation of the final models of human lipoxygenase, thromboxane synthase and COX-2 with the WhatIf program predicted RMS Z-scores of backbone–backbone contacts of −3.11, −2.58 and −1.52; backbone–side chain contacts of −2.01, −1.87 and −0.22; side chain–backbone contacts of −4.25, −3.75 and −2.62; and side chain–side chain contacts of −1.5, −0.87 and −0.99. Moreover, evaluation of the structural integrity of the final models of the above three human proteins showed Z-scores of −3.35, −2.64 and −1.56, which are closer to the normal value of 2.0 except for lipoxygenase. This trend continued with the data obtained with the WHATCHECK program, in which the Z-scores of bond lengths, bond angles, omega angle restraints, side chain planarity, improper dihedral distribution and inside/outside distribution are 7.611, 2.915, 1.895, 3.005, 3.825 and 1.096 for human lipoxygenase; 1.633, 1.986, 1.785, 3.283, 1.66 and 1.057 for thromboxane synthase; and 1.604, 2.007, 1.709, 3.226, 1.839 and 1.101 for COX-2, respectively. These values are positive, indicating better conformation of the protein (positive is better than average), and are similar to those of the crystallographic structures.
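A lightweight way to reproduce a Ramachandran-style check on such a model is sketched below using Biopython's PDB module. The model file name is a placeholder, and the phi/psi boxes are a crude stand-in for PROCHECK's region definitions, so the percentages will not match PROCHECK exactly.

import math
from Bio.PDB import PDBParser, PPBuilder

structure = PDBParser(QUIET=True).get_structure("model", "human_cox2_model.pdb")  # placeholder
phi_psi = []
for peptide in PPBuilder().build_peptides(structure):
    for phi, psi in peptide.get_phi_psi_list():
        if phi is not None and psi is not None:
            phi_psi.append((math.degrees(phi), math.degrees(psi)))

def in_crude_favored_box(phi, psi):
    beta = -180 < phi < -45 and 90 < psi < 180      # rough beta-sheet region
    alpha = -160 < phi < -45 and -70 < psi < -5     # rough right-handed alpha-helix region
    return beta or alpha

favored = sum(in_crude_favored_box(phi, psi) for phi, psi in phi_psi)
print(f"{favored / len(phi_psi):.1%} of {len(phi_psi)} residues fall in the crude favored boxes")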
Fig. 2. Phylogenetic analysis of COX-1 and COX-2 sequences predicted using the TREEVIEW software suite.

Table 1. BAC/PAC clone accession number, GenBank accession number, locus tag, number of transmembrane segments, % identity with the query, gene name, full-length cDNA sequences and EST accession numbers of both COX-1 and COX-2 genes predicted using various databases and software.
Sequence name; BAC/PAC clone accession number: S78220 AY449688 S36271 DQ895652 DQ180742 AL162424 DQ180741 NM_000962 AL162424 AL162424 NM_080591 S36219 AJ634912 AY151286 AL033533
GenBank accession number: AAB21215.1 AAR08907.1 AAB22217.1 ABM86578.1 ABA60099.1 CAI14716.1 ABA60098.1 NP_000953.2 CAI14715.1 CAI14714.1 NP_542158.1 AAB22216. CAG25548.1 AAN52932.1 CAB41240.1
Locus tag; number of TM segments: 1 1 1 1 1 1 1 1 1 1 1
% Identity with query; gene name: 58% 58% 58% 58% PTGS1 55% 12% 55% 58% 58% 55% 55% 55% 25% 53% 100% PTGS2
Full-length cDNA; EST accession numbers: HUMCOX1.1 HUMCOX1.2 HUMCOX1.3 HUMCOX1.4 HUMCOX1.5 HUMCOX1.6 HUMCOX1.7 HUMCOX1.8 HUMCOX1.9 HUMCOX1.10 HUMCOX1.11 HUMCOX1.12 HUMCOX2.1 HUMCOX2.2 HUMCOX2.3 RP11-542K23.6-003 RP11-542K23.6-001 RP11-542K23.6-002 RP5-973M2.1-001 Em:AY151286.1 Em:AY462100.1 Em:BC013734.1 Em:L15326.1 Em:M64291.1 Em:M90100.1 Em:U97696.1
HUMCOX2.4 NM_000963 NP_000954.1 1 100% Em:AL710848.1 Em:BF939218. Em:BM129013.1 Em:BQ002136.1 Em:CA436148.1 Em:CA445948.1 Em:CB146285.1 Em:CB960307.1 Em:CD609928.1 Em:CD609929.1 Em:CD609930.1

The residues in lipoxygenase 'Val8-Asp13', 'Leu15-Ser16', 'Phe41-Asp44', 'Lys79-Trp82', Arg95, Ile96, His98, 'Ile188-Leu196', Leu198, Leu205, Lys206, 'Ala397-Leu399', 'Glu401-Leu404', 'Ala406-Glu407', 'Asn594-Ala597', Met599, Arg600, 'Ile604-Thr606' and Asn697 (Fig. 6A); in thromboxane synthase, 'Phe59-Phe63', Phe87, 'Asn109-Phe117', 'Ser119-Val120', 'Leu125-Phe126', Arg128, 'Asp177-Arg180', 'Thr182-Cys183', 'Lys214-Arg223', 'Ile225-Leu226', Leu229, Ile235, Leu239, 'Lys246-Asn247', Leu251, 'Phe336-Ile337', 'Ile340-Ala341', Tyr343, 'Glu344-Ile345', 'Thr347-Asn348', 'Ser351-Phe352', 'Pro406-Thr411', Glu413, Val430, Phe472, 'Arg477-Cys479', Val482, Leu512, 'Leu514-Gly521', Lys523, Gly525 and Val526 (Fig. 6B); and in human COX-2, 'His24-Gln30', 'Arg46-Phe49', Asn53, Glu58, Leu60, Thr61, Lys64, Leu65, 'Lys68-Asn72', Val74, Leu78, Met99, 'Tyr101-Val102', Ser105, Arg106, 'His108-Leu109', Leu138, Val335, Leu338, Tyr341, Leu345, Glu451, 'Lys454-Pro460', 'Val509-Glu510', Ala513 and Leu517 (Fig. 6C) are highly conserved with the active sites of the templates. Residues 1–17 and 570–604 were removed from the model (human COX-2) because no homologous region occurred in 3PGH and these residues were not found near the active site. Therefore, the present model is made up of residues 18–569.
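The binding-site residues listed above were identified with MOE, which is not scriptable from Python; as a rough, hypothetical alternative, the Biopython sketch below lists every protein residue with an atom within 5 Å of the het-group atoms in a template structure such as 3PGH. The file name and the 5 Å cutoff are assumptions, not the authors' settings.

from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("3pgh", "3pgh.pdb")  # downloaded beforehand
model = structure[0]

# Crude split: het groups (excluding waters) versus standard protein residues.
ligand_atoms = [a for a in model.get_atoms()
                if a.get_parent().get_id()[0] not in (" ", "W")]
protein_atoms = [a for a in model.get_atoms()
                 if a.get_parent().get_id()[0] == " "]

search = NeighborSearch(protein_atoms)
pocket = set()
for atom in ligand_atoms:
    for residue in search.search(atom.coord, 5.0, level="R"):   # residues within 5 Å
        chain = residue.get_parent().get_id()
        pocket.add((chain, residue.get_id()[1], residue.get_resname()))

for chain, number, name in sorted(pocket):
    print(chain, name, number)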

Fig. 3. Calculated RMSD graphs of the molecular dynamics simulations of human cyclooxygenase-2 (A), thromboxane synthase (B) and lipoxygenase (C) using NAMD software. Time (ns) is plotted on the X-axis and RMSD (Å) on the Y-axis.
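The RMSD traces in Fig. 3 come from the NAMD refinement; a comparable plot can be generated from the NAMD output files with MDAnalysis, as in the sketch below. The PSF/DCD/PDB file names are placeholders, and a recent MDAnalysis version is assumed (older releases expose the results as R.rmsd instead of R.results.rmsd).

import MDAnalysis as mda
from MDAnalysis.analysis import rms
import matplotlib.pyplot as plt

# Hypothetical NAMD outputs: topology, equilibration trajectory and starting model.
traj = mda.Universe("cox2_model.psf", "cox2_equil.dcd")
reference = mda.Universe("cox2_model.psf", "cox2_model.pdb")

# Cα-trace RMSD of every frame against the starting model.
analysis = rms.RMSD(traj, reference, select="name CA")
analysis.run()

time_ns = analysis.results.rmsd[:, 1] / 1000.0  # column 1 holds time in ps
rmsd_angstrom = analysis.results.rmsd[:, 2]     # column 2 holds the RMSD in angstroms

plt.plot(time_ns, rmsd_angstrom)
plt.xlabel("Time (ns)")
plt.ylabel("RMSD (angstrom)")
plt.savefig("rmsd_vs_time.png")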

Fig. 4. The created 3D structures of human cyclooxygenase-2 (A), thromboxane synthase (B) and lipoxygenase (C). The structures were obtained by energy minimization and equilibration over the last 100,000 steps of a 2 ns molecular dynamics simulation. α-Helices are represented as ribbons and β-sheets as yellow arrows. The pictures were rendered with PyMOL.

3.4. Superimposition and secondary structure prediction

The RMSD of the Cα trace of the final refined models of human COX-2 (Fig. 7A), thromboxane synthase (Fig. 7B) and lipoxygenase (Fig. 7C) with the templates 1LOX, 1TQN and 3PGH was 1.68, 1.64 and 1.51 Å. The RMSD of the Cα trace of the models generated by MODELLER with the templates was 0.83, 0.30 and 0.17 Å, a difference of 1.38, 1.34 and 1.33 Å between the initial and final refined models. These differences create conformational changes in the active sites of the enzymes. Secondary structures were also analyzed based on the superimposition of their 3D structures with the SPDBV software suite (http://www.expasy.org/spdbv). The secondary structure of human COX-2 compared with that of its template was highly conserved, with 25 α-helices and 14 β-sheets, with a difference of 2 α-helices at α1 and α9 and 2 β-sheets at β6 and β16 (Fig. 8A). It was also found that α2, α3, α4, α5, α7, α8, α13, α14, α19, α27, β13, and β14 are longer than in the template. The secondary structure of human thromboxane synthase compared with that of its template was less conserved, with 21 α-helices and 10 β-sheets, with a difference of 3 α-helices at α4, α11, and α13 and 2 β-sheets at β8 and β9 (Fig. 8B).

It was found that α5, α15, α17, α19, α20, and α21 are longer than in the template. The secondary structure of human lipoxygenase compared with that of its template was less conserved, with 21 α-helices and 18 β-sheets, with a difference of 2 α-helices at α2 and α3 and differences in β-sheets at β4, β9, and β18 (Fig. 8C). It was also found that α1, α4, α12, α14, α20, and α23 are longer than in the template. In spite of several amino acid differences between the primary sequences of the models and their templates, their secondary structures are essentially identical, indicating that these models are reliable for docking.

3.5. Docking studies of new diaryl furan derivatives

Calculated docking scores for the celecoxib derivatives, designed by substituting different chemical groups on one of the benzene rings and replacing the 1H-pyrazole group in celecoxib with different five-membered rings such as thiophene, furan, 1H-pyrrole, 1H-imidazole, thiazole and 1,3-oxazole, showed that most of the diaryl furan molecules have good binding affinity towards mouse COX-2 (Table 2). The top 30 molecules showed high affinity scores with the crystal structure of COX-2 from mouse (PDB: 3PGH).
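As the Table 2 and Table 3 captions note, the reported total score is simply the sum of the chemgauss, chemscore, PLP, screenscore and shapegauss components produced by the OPENEYE docking run. The sketch below shows that aggregation and the ranking used to pick the top hits; the component values are illustrative placeholders, not numbers from the paper.

# Component scores per docked molecule: (chemgauss, chemscore, PLP, screenscore, shapegauss).
component_scores = {
    "M1": (-95.2, -30.1, -210.4, -250.0, -126.2),   # placeholder values
    "M4": (-88.7, -28.4, -195.9, -240.3, -92.8),
    "M9": (-90.1, -33.0, -205.5, -248.8, -55.0),
}

# Total score = sum of the five components (more negative = better predicted binding).
totals = {name: sum(parts) for name, parts in component_scores.items()}

# Rank and keep the 30 best-scoring molecules.
top_hits = sorted(totals.items(), key=lambda item: item[1])[:30]
for name, total in top_hits:
    print(f"{name}: {total:.2f}")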

Total docking scores of the top 30 docked conformations of the newly designed molecules with different side chains are shown in Table 3. It appears from this table that these inhibitors exhibit greater selectivity towards thromboxane synthase and COX-2 compared to lipoxygenase and prostacyclin synthase. Molecules 2–5 are likely to have better thromboxane synthase selectivity, with scores of -1045.42, -1061.653 and -1077.886, compared to the other enzymes. Molecule 5, which interacts with high affinity with thromboxane synthase, shows a hydrogen-bonding interaction with the main-chain oxygen of Met81.

Fig. 5. The 3D profiles of the human cyclooxygenase-2 (A), thromboxane synthase (B) and lipoxygenase (C) models verified using the Verify-3D server. An overall compatibility score above zero indicates that residues are reasonably folded.

Molecules 1 and 5 interact with Arg120 and Phe518 of mouse COX-2 through two hydrogen-bonding interactions, in which the sulfonamide group orients towards Arg120 at the base of the active site and the carboxylate group interacts with Phe518. In human COX-2, the carboxylate group of molecule 2 interacts with the side-chain atoms of Arg106 and Tyr341, a key catalytic residue just below the heme, similar to Arg120 and Tyr355 in the crystal structure of mouse COX-2.

This suggests that a compound with a CH2F group, such as molecule 5, orients in the vicinity of Tyr341 and shows stronger inhibitory activity on thromboxane synthase than compounds without the CH2F group. Molecule 6 forms two hydrogen-bonding interactions with the main-chain oxygen atom of Tyr355 and the Oε atom of Glu524, which in turn interacts with Arg120 exclusively in the closed form. In the open conformation, Glu524 bridges Arg120 and Arg513. Molecules 7–12, with CF3 at the R2 position, show better selectivity for thromboxane synthase than for the other enzymes.

Inhibitor 7, which has a pyridyl ring at the R1 position, allows this part of the inhibitor to penetrate deep into the hydrophobic pocket and shows greater binding affinity with thromboxane synthase compared to the other enzymes by interacting with the main-chain oxygen of Arg379. Molecule 9, with 4-NO2 at the R1 position, shows better binding affinity with human thromboxane synthase (-1126.585) compared to the crystal structure of mouse COX-2 (-632.4), lipoxygenase (-322.23) and prostacyclin synthase (-439.44). This may provide additional hydrophobic and hydrophilic interactions.

The nitro groups are close to the charged residue Arg120 lying at the entrance of the COX-2 active site. The phenyl group fills the top of the channel and is stabilized by π–π interactions with Trp387 and Phe518, respectively. A CH–π interaction is also observed between Tyr355 and the phenyl rings. Molecule 9, the best of the series of inhibitors with a high affinity of -1126.585 for thromboxane synthase, shows two hydrogen-bonding interactions with the main-chain oxygens of Met81 and Ser83, compared to COX-2 from mouse, human lipoxygenase and prostacyclin synthase. Molecules 16 and 18 show almost equal affinity for thromboxane synthase and for human and mouse COX-2. Among molecules 18 to 21, which have CHF2 at the R2 position, the binding affinity varies, showing high affinity (-866.857) against thromboxane synthase compared to M19 (-850.624), M20 (-866.857) and M21 (-883.09) but the least activity with prostacyclin synthase (Table 3). Hydrogen-bonding analysis shows that molecules 21, 25, 27 and 29 bind with the NH2 group of Arg106 in COX-2 and with the oxygen of Met81, Oγ of Ser83, Oε1 of Glu188, Nζ of Lys216, NH of Arg80 and the main-chain oxygen of Met81 in the thromboxane synthase enzyme. Molecule 29, with fluorine and CO2Me at the R1 and R2 positions, shows high binding affinity with thromboxane synthase by binding with the NH atom of Arg80 and the main-chain oxygen of Met81, scoring -1012.954, compared to the other enzymes. It is interesting to note that molecule 6, with an OMe group in place of CF3, seems to retain the thromboxane synthase and COX-2 preference over lipoxygenase and prostacyclin synthase. These studies show that the binding scores of all the compounds are better for thromboxane synthase and COX-2 than for lipoxygenase and prostacyclin synthase. Therefore, these molecules preserve the levels of prostacyclin synthase, which inhibits thrombosis, for people undergoing myocardial infarction.
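The hydrogen-bonding contacts described above (for example with the main-chain oxygen of Met81 or the NH2 group of Arg106) can be screened for in a docked pose with a simple distance criterion, as in the hedged Biopython sketch below. The merged complex file and the LIG residue name are hypothetical, and a 3.5 Å cutoff between polar atoms is only a rough proxy for a true hydrogen bond (angles are ignored).

from Bio.PDB import PDBParser, NeighborSearch

# Hypothetical file: receptor plus one docked pose saved as a single PDB,
# with the ligand stored under residue name "LIG".
model = PDBParser(QUIET=True).get_structure("pose", "txs_molecule9_pose.pdb")[0]

ligand_atoms = [a for a in model.get_atoms() if a.get_parent().get_resname() == "LIG"]
protein_atoms = [a for a in model.get_atoms() if a.get_parent().get_resname() != "LIG"]

search = NeighborSearch(protein_atoms)
for lig_atom in ligand_atoms:
    if lig_atom.element not in ("N", "O"):      # only polar ligand atoms
        continue
    for prot_atom in search.search(lig_atom.coord, 3.5):
        if prot_atom.element in ("N", "O"):     # only polar protein atoms
            residue = prot_atom.get_parent()
            distance = lig_atom - prot_atom     # Bio.PDB atoms subtract to a distance
            print(f"{residue.get_resname()}{residue.get_id()[1]} "
                  f"{prot_atom.get_name()} ... {lig_atom.get_name()}: {distance:.2f} A")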

Site-directed mutagenesis of Arg120Ala, Ser530Ala, Ser530Met and Tyr355Phe of mouse COX-2 abolishes the activity of molecules 4–6 and reduces the activity of the other molecules compared to the wild type. This indicates that these are the important determinant residues for the activity of COX-2 (Table 4). To evaluate the docking accuracy, known inhibitors (pyroxicam, nimesulide, naproxen, mefenamic acid, meclofenamic acid, ketorolac, indomethacin, ibuprofen and diclofenac) were docked against the wild and mutant types of mouse COX-2 (PDB: 3PGH). Pyroxicam shows two hydrogen-bonding interactions with the NH1 and OH groups of Arg120 and Tyr355 (Fig. 9A); nimesulide shows five hydrophobic interactions with the Cγ1, Cγ, Cδ1, Cε2 and Cγ2 atoms of Val349, Leu352, Leu359, Phe518 and Val523 (Fig. 9B); naproxen shows two hydrogen bonds with the NH1 and OH groups of Arg120 and Tyr355 and five hydrophobic interactions with the Cγ1, Cδ2, Cδ1, Cε2 and Cγ1 atoms of Val349, Leu352, Leu359, Phe518 and Val523 (Fig. 9C); mefenamic acid shows two hydrogen-bonding interactions with the OH and Oγ atoms of Tyr385 and Ser530 and eight hydrophobic interactions with the Cγ1, Cδ2, Cε1, Cδ1, Cζ2, Cγ, Cγ1 and Cγ atoms of Val349, Leu352, Phe381, Leu384, Trp387, Met522, Val523 and Leu531 (Fig. 9D); meclofenamic acid shows a single hydrogen bond with the Oγ atom of Ser119 and seven hydrophobic interactions with the Cγ1, Cδ1, Cη2, Cγ2, Cβ, Cδ2 and Cδ2 atoms of Val89, Leu39, Trp100, Ile112, Val116, Phe357 and Leu359 (Fig. 9E); ketorolac shows seven hydrophobic interactions with the Cγ1, Cδ2, Cε1, Cδ1, Cζ2 and Cγ atoms of Val349, Leu352, Phe381, Leu384, Trp387, Met522 and Leu531 (Fig. 9F); indomethacin shows hydrophobic interactions with the Cδ1, Cγ2, Cγ, Cδ2, Cδ2, Cγ2 and Cγ atoms of Leu93, Val116, Val349, Leu352, Phe357, Leu359 (Fig. 9G), Val523 and Leu531; ibuprofen shows two hydrogen-bonding interactions with the NH1 and OH groups of Arg120 and Tyr355 and eight hydrophobic interactions with the Cγ2, Cγ1, Cδ2, Cδ1, Cζ2, Cγ, Cγ2 and Cδ1 atoms of Val116, Val349, Leu352, Leu359, Trp387, Met522, Val523 and Leu531 (Fig. 9H); and diclofenac shows one hydrogen-bonding interaction with Tyr385 and eight hydrophobic interactions with the Cγ2, Cγ, Cε1, Cδ1, Cζ2, Cε2, Cγ and Cγ1 atoms of Val349, Leu352, Phe381, Leu384, Trp387, Phe518, Met522 and Val523, respectively (Fig. 9I).

The calculated total docking scores are shown in Table 5, and it is clear from the table that the aryl carboxylic acid meclofenamic acid is equipotent against the wild-type enzyme (-585.39) and Ser530Ala (-585.39) but exhibits reduced inhibition of Tyr355Phe, with a docking score of -569.3. Inhibition by indomethacin was very sensitive to mutation of Arg120Ala, Tyr355Phe and Ser530Ala, with docking scores of -712.18, -748.95 and -717.75. Mutagenesis studies and the crystal structure of the COX-2–indomethacin complex show that the carboxylic acid of indomethacin interacts with Arg120 and Tyr355 with bond distances of 2.4 and 3.0 Å, respectively [48–51].

Fig. 6. Active sites of lipoxygenase (A), thromboxane synthase (B) and cyclooxygenase-2 (C) predicted using MOE software. α-Helices are represented in red, β-sheets in green and loops in magenta. The active-site area is represented as spheres.

Ser530Ala enzymes are completely resistant to inhibition by diclofenac (-573.22), and no significant effect is seen on Arg120Ala and Tyr355Phe based on the docking scores for the wild type (-535.97) and the mutant types (-561.04 and -577.97). However, the arylcarboxylic acid ketorolac did not inhibit Arg120Ala or Ser530Ala, with docking scores of -565.39 and -577.5, but showed inhibitory activity against Tyr355Phe, with a docking score of -592.71. Pyroxicam contains no carboxylic acid, but showed inhibitory activity against Tyr355Phe with a docking score of -744.96 compared to the wild type, which displayed -735.64. Nimesulide showed no impact on Arg120Ala, Tyr355Phe and Ser530Ala. Our studies also show that nimesulide has weaker inhibitory activity against Ser530Ala. Thus, the effect of the Ser530Ala mutation suggests that nimesulide binds to COX-2 so as to maximize interaction with the Ser530 hydroxyl group. Naproxen shows an impact on the inhibition of the Tyr355Phe enzyme, with a docking score of -551.3 compared to the wild-type score of -617.8 (Table 5). Mefenamic acid exhibits no impact on Arg120Ala, Tyr355Phe and Ser530Ala. Ibuprofen displays reduced activity against Tyr355Phe, with a docking score of -472.67, but no change on Arg120Ala and Ser530Ala, with docking scores of -488.73 and -483.87. A regression analysis of docking scores and logIC50 for the NSAIDs was carried out, and scatter plots were drawn for the wild and mutant types as shown in Fig. 10. The r2 values for the wild type (Fig. 10A) and the Arg120Ala (Fig. 10B), Tyr355Phe (Fig. 10C) and Ser530Ala (Fig. 10D) mutants are 0.81, 0.96, 0.63 and 0.9, respectively. These graphs show that the docking scores correlate well with the results obtained from experimental data [52]. Hence, the docking results for these inhibitors can perhaps be used as a new pharmacophore for lead generation and optimization of novel antithrombotic and anti-inflammatory agents. On the basis of the docking results and bioavailability scores (Table 6), compound 4 is predicted to be the best antithrombotic and anti-inflammatory agent.
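The regression of docking scores against logIC50 described above can be reproduced with an ordinary least-squares fit, for example with SciPy as sketched below. The wild-type docking scores are taken from Table 5, but the logIC50 values shown are placeholders rather than the experimental data of reference [52].

from scipy import stats

# Wild-type docking scores for the nine NSAIDs (Table 5, in the order listed there).
docking_scores = [-735.64, -552.94, -617.8, -632.24, -585.39,
                  -589.29, -697.99, -493.85, -535.97]
# Hypothetical logIC50 values standing in for the experimental data.
log_ic50 = [0.3, 1.2, 0.9, 0.5, 0.1, 0.8, -0.2, 1.5, 0.7]

fit = stats.linregress(docking_scores, log_ic50)
print(f"y = {fit.slope:.4f}x + {fit.intercept:.3f}, r^2 = {fit.rvalue ** 2:.2f}")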

Fig. 7. Superimposition of the Cα traces of cyclooxygenase-2 (A), thromboxane synthase (B) and human lipoxygenase (C) (represented in red) and the templates 1PXX, 1TQN and 1LOX (represented in blue).

4. Discussion

To design efficient inhibitors against the crystal structure of COX-2, phylogenetic analysis of all the COX-2 sequences was carried out using the entire human genome. Phylogenetic analysis revealed 11 COX-1 and 4 COX-2 sequences. It appeared that both COX-1 and COX-2 sequences are highly conserved in their secondary structures and in the active site of the protein, except for the residue Val523 in COX-2, which is replaced by Ile in COX-1 [46]. The identity of both COX-1 and COX-2 sequences was above 55%, but HUMCOX1.6 and HUMCOX2.3 showed less than 30% identity with the reference sequence. The phylogenetic tree also revealed that the COX-1 sequences form five sub-families. HUMCOX1.1–1.4 are closely related to one another and form one sub-family, HUMCOX1.5–1.9 form another, but HUMCOX2.0–2.2 form separate branches in the phylogenetic tree. It was also noticed that only HUMCOX2.3 in COX-2 has ESTs and cDNA libraries. Based on these findings, it appears that the COX-1 and COX-2 sequences are evolutionarily related, with a high percentage of identity and similarity. A homology model for human COX-2, thromboxane synthase and lipoxygenase was derived based on pairwise sequence alignment using 3PGH, 1TQN and 1LOX as the templates. Interestingly, the average pairwise RMS deviation of the Cα coordinates of these homologs was very low, indicating strong structural conservation. Refinement of the homology models resulted in structures with lower RMSDs compared to the unrefined structures. The Ramachandran plot showed that more than 98% of residues occupied the allowed regions, and no key or important residues were seen in the disallowed regions. Verify-3D analysis suggested that β3 in human COX-2, the loop between α7 and α8 in thromboxane synthase and α1 of lipoxygenase were slightly misfolded; these regions do not appear in the active sites of any of the 3D structures.
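Homology models like the ones discussed here are typically built with MODELLER's Python interface; the sketch below outlines one way to do it for human COX-2 on the 3PGH template. The alignment file name and the target/template codes are assumptions, and the settings shown (five models, slow MD refinement) are generic rather than the authors' exact protocol.

from modeller import environ
from modeller.automodel import automodel, refine

env = environ()
env.io.atom_files_directory = ["."]           # directory expected to contain 3pgh.pdb

builder = automodel(env,
                    alnfile="cox2_3pgh.ali",  # PIR alignment of target and template (hypothetical)
                    knowns="3pgh",            # template code used in the alignment
                    sequence="human_cox2")    # target code used in the alignment
builder.starting_model = 1
builder.ending_model = 5                      # build five candidate models
builder.md_level = refine.slow                # MD-based refinement of each model
builder.make()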

In summary, the above-mentioned analyses indicate that the model structures are consistent with the current understanding of protein structure. Docking studies with 4-(2-phenyl-3-thienyl) benzene sulfonamide, 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide, 4-(2-phenyl-1H-pyrrol-3-yl) benzene sulfonamide, 4-(5-phenyl-1H-imidazol-4-yl) benzene sulfonamide, 4-(5-phenyl-1,3-oxazol-4-yl) benzene sulfonamide and 4-(5-phenyl-1,3-thiazol-4-yl) benzene sulfonamide analogs against the mouse crystal structure show that the diaryl furan derivatives bind more effectively than the other molecules used in the study.
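The six scaffolds compared above differ only in the central five-membered ring. The RDKit sketch below enumerates simplified, fully aromatic stand-ins for those analogs and confirms that each parses to a valid molecule; the SMILES strings are illustrative approximations written for this example, not the exact structures docked in the study (which include a tetrahydrofuran variant).

from rdkit import Chem
from rdkit.Chem import Descriptors

# 2-phenyl / 3-(4-sulfamoylphenyl) substitution on different central rings (approximate stand-ins).
scaffolds = {
    "thiophene":    "s1c(-c2ccccc2)c(-c2ccc(S(N)(=O)=O)cc2)cc1",
    "furan":        "o1c(-c2ccccc2)c(-c2ccc(S(N)(=O)=O)cc2)cc1",
    "1H-pyrrole":   "c1cc(-c2ccc(S(N)(=O)=O)cc2)c(-c2ccccc2)[nH]1",
    "1H-imidazole": "c1[nH]c(-c2ccccc2)c(-c2ccc(S(N)(=O)=O)cc2)n1",
    "1,3-oxazole":  "o1cnc(-c2ccc(S(N)(=O)=O)cc2)c1-c1ccccc1",
    "1,3-thiazole": "s1cnc(-c2ccc(S(N)(=O)=O)cc2)c1-c1ccccc1",
}

for name, smiles in scaffolds.items():
    mol = Chem.MolFromSmiles(smiles)
    # Print the canonical SMILES and molecular weight as a quick sanity check.
    print(name, Chem.MolToSmiles(mol), round(Descriptors.MolWt(mol), 1))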

Docking studies with the 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide analogs against human lipoxygenase, thromboxane synthase and COX-2 revealed that they bind effectively, with stronger hydrogen-bonding interactions. It appears that His90, Arg120, Gln192, Leu352, Tyr355, Tyr385, Phe470, Phe518, Glu524, Gly526 and Ser530 of mouse COX-2, and Arg106 and Tyr341 of human COX-2, are the main residues involved in hydrogen-bonding interactions.

These studies also reveal that when the central ring oxygen interacts with Arg120, the sulfonamide moves into the side pocket of the active site, where it is involved in hydrogen bonds with polar residues such as His90 and Arg513 and is also hydrogen bonded to Glu524. These interactions are essential for COX-2 inhibitory activity, as exemplified by the binding interactions of SC-558, an analog of celecoxib co-crystallized in the COX-2 active site [48]. The side chains occupy a nonpolar cleft in the entry channel and are bounded by Val116, Met113, Ile112, Phe357, Leu369 and Leu93.

Fig. 8. Secondary structure alignment of human cyclooxygenase-2 (A), thromboxane synthase (B) and lipoxygenase (C) with the templates 1PXX, 1TQN and 1LOX, predicted using the SPDBV software suite. α-Helices are represented by red boxes and β-sheets by blue boxes.

Table 2. Total docking scores of the top 30 docked conformations of newly designed inhibitors based on 1H-pyrrole, 1H-imidazole, thiophene, 1,3-oxazole and 1,3-thiazole groups against the crystal structure of mouse cyclooxygenase-2 (PDB: 3PGH), predicted using OPENEYE software. Total score is the sum of the chemgauss, chemscore, PLP, screenscore and shapegauss scores.
Comp: 1 4 5 6 7 8 9 10 11 12 14 19 20 21 22 23 24
R1: F Cl H H 2-Pyridyl 4-Pyridyl 4-NO2 4-NH2 4-NHMe 4-CH2OH F 4-CONH2 4-CO2H 4-OMe 3-Fluoro-4-methoxy 3-Fluoro-4-methoxy 5-Methyl-2-furyl
R2: CO2H CH2OH CH2F OMe CF3 CF3 CF3 CF3 CF3 CF3 H CHF2 CHF2 CHF2 CHF2 CHF2 CHF2 -617.7
R3:
1H pyrrole: -496.29 -753.16 -768.27 -783.38 -798.49 -813.6 -828.71 -511.4 -526.51 -541.62 -556.73 -571.84 -586.95 -602.06 -579.5 -632.28 -647.39
1H imidazole: -479.18 -692.36 -704.9 -717.44 -729.98 -742.52 -755.06 -491.72 -504.26 -516.8 -529.34 -541.88 -554.42 -566.96 -456.26 -592.04 -604.58
Thiophene: -527.1 -415.78 -410.72 -405.66 -400.6 -395.54 -390.48 -522.04 -516.98 -511.92 -501.8 -476.5 -466.38 -461.32 -670.76 -451.2 -446.14
Furan: -711.86 -646.1 -643.36 -640.62 -637.88 -635.14 -632.4 -706.38 -703.64 -700.9 -695.42 -678.98 -676.4 -673.5 -538.27 -668.02 -665.28
1,3-Oxazole: -547.63 -528.91 -527.87 -526.83 -525.79 -524.75 -523.71 -546.59 -545.55 -544.51 -543.47 -542.43 -540.35 -539.31 -520.56 -537.23 -536.19
1,3-Thiazole: -520.74 -520.38 -520.36 -520.34 -520.32 -520.3 -520.28 -520.72 -520.7 -520.68 -520.66 -520.64 -520.6 -520.58 -520.54 -520.52
25 26 27 28 29 H F H Cl H CHF2 CO2Me CH3 CH2OH CH2F -662.5 -677.61 -692.72 -707.83 -722.94 -617.12 -629.66 -642.2 -654.74 -667.28 -441.08 -436.02 -430.96 -425.9 -662.54 -659.8 -657.06 -654.32 -651.58 -535.15 -534.11 -533.7 -532.03 -530.99 -520.5 -520.48 -520.46 -520.44 -520.42

Table 3. Docking scores of the top 30 docked conformations of newly designed inhibitors based on the furan group against the crystal structure of mouse cyclooxygenase-2 (PDB: 3PGH) and the homology models of human cyclooxygenase-2, lipoxygenase and thromboxane synthase, predicted using OPENEYE software. Total score is the sum of the chemgauss, chemscore, PLP, screenscore and shapegauss scores.
Comp: 1 4 5 6 7 8 9 10 11 12
R1: F Cl H H 2-Pyridyl 4-Pyridyl 4-NO2 4-NH2 4-NHMe 4-CH2OH
R2: CO2H CH2OH CH2F OMe CF3 CF3 CF3 CF3 CF3 CF3
R3:
3PGH: -711.6 -646.1 -643.36 -640.62 -637.88 -635.14 -632.4 -706.38 -703.64 -700.9
Human COX-2: -619.24 124.14 157.93 191.72 225.51 259.3 293.09 -585.45 -551.66 -517.87
Thromboxane synthase: -652.56 -1045.42 -1061.653 -1077.886 -1094.119 -1110.352 -1126.585 -680.74 -706.67 -720.76
Lipoxygenase: -599.25 -373.53 -363.27 -353.01 -342.75 -332.49 -322.23 -588.99 -578.73 -568.47
Prostacyclin synthase: -705.68 – -480.4 -470.16 -459.92 -449.68 -439.44 -695.44 -685.2 -674.96
Table 3 (continued)
Comp: 13 14 15 16 17 18 19 20 21 22 23 24
R1: CONH2 F H H H 4-SO2Me 4-CONH2 4-CO2H 4-OMe 3-Fluoro-4-methoxy 5-Methyl-2-furyl
R2: Cl H F SO2Me NH2 CHF2 CHF2 CHF2 CHF2 3-Fluoro-4-methoxy CHF2 CHF2
R3: Cl H Cl H
3PGH: -698.16 -695.42 -692.68 -689.94 -687.2 -681.72 -678.98 -676.4 -673.5 -670.76 -668.02 -665.28
Human COX-2: -484.08 -450.29 -416.5 -382.71 -348.92 -315.13 -281.34 -247.55 -213.76 -179.97 -146.18 -112.39
Thromboxane synthase: -736.993 -753.226 -769.459 -785.692 -818.158 -834.391 -850.624 -866.857 -883.09 -899.323 -915.556 -931.789
Lipoxygenase: -558.21 -547.95 -537.69 -527.43 -517.17 -506.91 -496.65 -486.39 -476.13 -465.87 -455.61 -445.35
Prostacyclin synthase: -664.72 -654.48 -644.24 -634 -623.76 -613.52 -603.28 -593.04 -582.8 -572.56 -562.32 -552.08
CHF2 25 26 27 28 29 H F H Cl H CHF2 CO2Me CH3 CH2OH CH2F -662.54 -659.8 -657.06 -654.32 -651.58 -78.6 -44.81 -11.02 22.77 56.56 -948.022 -964.255 -980.488 -996.721 -1012.954 -435.09 -424.83 -414.57 -404.31 -394.05 -541.84 -531.6 -521.36 -511.12 -500.88

Table 4. Total docking scores of the top 30 docked conformations of newly designed inhibitors based on the furan group against mutants of mouse cyclooxygenase-2 (PDB: 3PGH), predicted using OPENEYE software. Total score is the sum of the chemgauss, chemscore, PLP, screenscore and shapegauss scores.
Comp: 1 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 21 23 26 28 29
R1: F Cl H H 2-Pyridyl 4-Pyridyl 4-NO2 4-NH2 4-NHMe 4-CH2OH CONH2 F H H H 4-SO2Me 4-CONH2 4-OMe 5-Methyl-2-furyl F Cl H
R2: CO2H CH2OH CH2F OMe CF3 CF3 CF3 CF3 CF3 CF3 Cl H F SO2Me NH2 CHF2 CHF2 CHF2 CHF2 CO2Me CH2OH CH2F
R3: Cl H Cl H
Arg120Ala: -453.08 74.94 106 137.06 168.12 199.18 230.24 -422.02 -390.96 -359.9 -328.84 -297.78 -266.72 -235.66 -204.6 -173.54 -142.48 -111.42 -80.36 -49.3 -18.24 12.82
Ser530Ala: -454.85 158.47 190.75 223.03 255.31 287.59 319.87 -390.29 -358.01 -1203.15 -293.45 -261.17 -554.2 -196.61 -164.33 -99.77 -67.49 -35.21 -2.93 29.35 61.63 93.91
Ser530Met: -454.85 93.91 126.19 158.47 190.75 223.03 255.31 -422.57 -390.29 -358.01 -325.73 -293.45 -261.17 -228.89 -196.61 -164.33 -132.05 -99.77 -67.49 -35.21 -2.93 29.35
Tyr355Phe: -454.85 158.47 190.75 223.03 255.31 287.59 319.87 -390.29 -358.01 -325.73 -293.45 -261.17 -228.89 -196.61 -164.33 -99.77 -67.49 -35.21 -2.93 29.35 61.63 93.91

The inhibitors assume their final binding position, in which the side chains occupy the central channel. The known inhibitors diclofenac, pyroxicam and nimesulide showed no inhibition against the site-directed Ser530Ala mutant enzyme, but diclofenac exhibits an effect on the Arg120Ala, Tyr355Phe and Ser530Met enzymes. Diclofenac also showed activity against the Arg120Ala and Ser530Met mutants of COX-2 [51,53]. Llorens et al. suggested two conformations of diclofenac bound to COX-2, in which the dichlorophenyl group projects down into the main channel of the active site [55]. Our docking scores suggest that Ser530Ala and Ser530Met are inhibited by pyroxicam.

This indicates that Arg120, Tyr355 and Ser530 are involved in interactions in extended conformations. Mutation of any one of these residues abolishes inhibitory activity. It can therefore be predicted that these residues are important for binding in the COX active site.

Fig. 9. Binding of pyroxicam (A), nimesulide (B), naproxen (C), mefenamic acid (D), meclofenamic acid (E), ketorolac (F), indomethacin (G), ibuprofen (H) and diclofenac (I) in the active site of the mouse cyclooxygenase-2 enzyme (PDB: 3PGH). Ligands are represented as ball-and-stick models and residues are labeled in black. The protein is represented in green.

Further, mutation of Ser530 to alanine or methionine inhibits the activity, suggesting that the polar hydroxyl group of Ser530 is involved in a hydrogen-bond interaction with nimesulide.

Our studies show that these molecules bind to COX-2 in an extended conformation involving interaction with all of these residues. A suitable position of the two polar groups, near Arg120 and in the hydrophilic side pocket, seems to be important. Moreover, involvement of the phenyl ring in π–π interactions at the top of the channel probably leads to enhanced COX-2 inhibitory activity. Our studies and the existing literature demonstrate that three distinct anchoring sites contribute to substrate and inhibitor binding in the COX active site. The first anchoring site lies at the junction of Arg120 and Tyr355 near the membrane surface.

These residues drive the affinity and orientation of inhibitors. The second major anchoring point is the side pocket, defined by residues Tyr355, Val523, His90, Gln192 and Arg513. The selectivity of diaryl heterocyclic inhibitors that contain phenylsulfonamides is in part determined by interactions in this side pocket. Similarities in the binding modes of these inhibitors support the possibility of substrate inhibition in COX-2. Mutation of any of these residues abolishes inhibitory activity, suggesting that concerted interaction with all three residues is perhaps essential for binding in the active site of COX-2.

For example, these inhibitors do not efficiently inhibit the Tyr385Phe mutant of COX-2. This indicates that Tyr385 plays an important role in positioning the side chain and enhancing its chemical reactivity with Ser530. Mutation of Ser530 to methionine causes increased bulkiness compared to acetylated serine, which results in a decreased interaction with the inhibitors in the active site.

Fig. 9. (Continued.)

Table 5. Total docking scores of NSAIDs against wild and mutant types of cyclooxygenase-2 from mouse (PDB: 3PGH), predicted using the OPENEYE software suite.
Ligand               Wild type   Arg120Ala   Tyr355Phe   Ser530Ala
Pyroxicam            -735.64     -697.97     -735.93     -732.95
Nimesulide           -552.94     -576.25     -584.91     -562.71
Naproxen             -617.8      -604.73     -551.13     -612.38
Mefenamic acid       -632.24     -620.39     -621.3      -637.95
Meclofenamic acid    -585.39     -557.23     -569.3      -585.39
Ketorolac            -589.29     -565.39     -592.71     -577.75
Indomethacin         -697.99     -712.18     -748.95     -717.75
Ibuprofen            -493.85     -488.73     -472.67     -483.87
Diclofenac           -535.97     -561.04     -560.7      -573.22

This also causes better interactions with the inhibitors due to the nonpolar groups of methionine, compared to mutation of Ser530 to alanine. Mancini et al. [54] showed the importance of Ser530 of COX-2 for interaction with the fenamate inhibitors diclofenac and meclofenamic acid. It appears that Arg110, Glu218, Arg223, Ser518, Ile221, Tyr343, Glu344, Asn348, Pro406, Ala407, Phe472, Ser478, Ala519 and Leu520 are important determinant residues involved in hydrogen-bonding interactions with thromboxane synthase.
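One quick way to read Table 5 is to compare each mutant score with the wild-type score for the same ligand, so that mutation-sensitive inhibitors stand out. The sketch below does this for a subset of the NSAIDs using the Table 5 values; only the delta calculation is shown, not any part of the authors' workflow.

# Docking scores from Table 5: (wild type, Arg120Ala, Tyr355Phe, Ser530Ala).
scores = {
    "Pyroxicam":    (-735.64, -697.97, -735.93, -732.95),
    "Indomethacin": (-697.99, -712.18, -748.95, -717.75),
    "Naproxen":     (-617.8,  -604.73, -551.13, -612.38),
    "Ibuprofen":    (-493.85, -488.73, -472.67, -483.87),
}

mutants = ("Arg120Ala", "Tyr355Phe", "Ser530Ala")
for ligand, (wild_type, *mutant_scores) in scores.items():
    # Positive delta = weaker predicted binding to the mutant than to the wild type.
    deltas = {name: round(score - wild_type, 2) for name, score in zip(mutants, mutant_scores)}
    print(ligand, deltas)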

Docking studies with thromboxane synthase indicate that there is sufficient space in the active site for simultaneous occupancy by the diaryl furan derivatives in several possible combinations. Studies with lipoxygenase reveal that the sulfonamide group is oriented towards the entrance of the enzyme-binding site. The NH2 group of the sulfonamide moiety is located within hydrogen-bonding contact range of the amino acid residues Lys286, His336, Ser359, Asp365, Lys454, Ser478, Cys626, Thr634, Arg213 and Gly215.

Docking studies with prostacyclin synthase show that the inhibitors bind, with less affinity, to the Cδ1 of Phe96, Cβ of Leu103, Nζ atom of Lys121, Nδ2 of Asn287, main-chain nitrogen of Thr358, NH1 of Arg359, Nδ2 of Asn439 and main-chain oxygen of Phe483. The side chain of Leu103 faces the substrate-binding channel, and the hydrophilic residue Asn287 is nearer to the heme involved in substrate binding, as shown by Chiang et al. [55]. The docking scores and hydrogen-bonding interactions show that these molecules act as the best inhibitors against thromboxane synthase, COX-2 and lipoxygenase and display the least binding affinity to prostacyclin synthase. It is known that these molecules inhibit the levels of prostacyclin synthase in people undergoing myocardial infarction. Linear regression analysis between the docking scores and the experimental activities of known inhibitors showed a good correlation, indicating that the docking scores are highly reliable. A model with a correlation coefficient (r2) of 0.62 was obtained for 8 compounds (diclofenac, ibuprofen, indomethacin, ketorolac, meclofenamic acid, mefenamic acid and pyroxicam) using the equation Y = -0.006x - 1.793. Removal of naproxen, identified as an outlier, from the docking dataset yields a better model with a correlation coefficient (r2) of 0.81. This good correlation demonstrates that the binding conformations and models of the known inhibitors with the crystal structure of mouse COX-2 are reasonable. Removal of ketorolac from the dataset yields excellent models with r2 values of 0.96 and 0.63 for Arg120Ala and Tyr355Phe. Similarly, removal of diclofenac from the dataset yields a good model with an r2 of 0.59 for the Ser530Ala mutant. Bioactivity of the molecules predicted using the molinspiration server [46] shows (Table 6) five or fewer rotatable bonds, molecular weights between 300 and 500, log P values between 1 and 5, hydrogen-bond donors between 2 and 6 and acceptors between 2 and 4, with zero violations.

Fig. 10. Correlation of experimental affinities (IC50) and their docking scores against the wild type (A), Arg120Ala (B), Tyr355Phe (C) and Ser530Ala (D), calculated using OPENEYE software.

Table 6. Biological activity values of the top 30 2-cyclohexa-2,4-dien-1-yl-3-phenylfuran analogs calculated using the molinspiration server. Drug-likeness and drug scores were calculated using the OSIRIS server.
Mol: 1 4 6 13 14 16 17 18 27
R1: F Cl H CONH2 F H H 4-SO2Me H
R2: CO2H CH2OH OMe Cl H SO2Me NH2 CHF2 CH3
R3: H Cl Cl H
Mi log P: 3.2 1.2 0.7 3.6 3.4 0.5 0.4 2.7 0.7
TPSA: 110.6 93.5 82.5 116.3 73.3 73. 107.4 99.3 73.3
N atoms: 25 24 23 26 22 22 26 22 22
MW: 361.3 362.8 328.3 411.2 317.3 312.3 410.8 314.3 312.3
N ON: 6 5 5 6 4 4 6 5 4
N OHNH: 3 3 2 4 2 2 2 4 2
N rotb: 4 4 4 4 3 3 4 3 3
Vol: 283.0 286.4 274.1 308.4 256.0 265.1 310.1 262.4 265.1
Drug likeness: 0.36 2.03 1.2 2.67 0.32 1.08 2.11 1.11 1.08
Drug score: 0.49 0.54 0.52 0.45 0.4 0.43 0.4 0.48 0.53
The drug-likeness of the molecules was predicted using the OSIRIS server [47], which is based on a comparison of 5000 marketed drugs (positives) and 10,000 carefully selected non-drug compounds (negatives).

It shows that molecules 1, 4, 6, 13, 14, 16, 17, 18 and 27 have positive scores within the range of 0–2.6. These results indicate that the above-mentioned molecules may act as drugs and inhibit the activity of the COX-2 and thromboxane synthase enzymes. The results also show that the molecule with chlorine and CH2OH at the R1 and R2 positions (molecule 4) binds with high affinity to thromboxane synthase and COX-2 but shows no affinity for prostacyclin synthase. This indicates that this molecule may be the best inhibitor for COX-2 and thromboxane synthase.
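Properties of the kind listed in Table 6 (molecular weight, logP, TPSA, hydrogen-bond donors and acceptors, rotatable bonds) can be estimated locally with RDKit instead of the molinspiration server, as sketched below for a generic 2,3-diarylfuran sulfonamide stand-in. The SMILES and the pass/fail ranges simply mirror the ranges quoted above; they are not the paper's exact molecules or cut-offs.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Generic 2-phenyl-3-(4-sulfamoylphenyl)furan stand-in (hypothetical structure).
mol = Chem.MolFromSmiles("o1c(-c2ccccc2)c(-c2ccc(S(N)(=O)=O)cc2)cc1")

properties = {
    "MW":   Descriptors.MolWt(mol),
    "logP": Descriptors.MolLogP(mol),
    "TPSA": Descriptors.TPSA(mol),
    "HBD":  Lipinski.NumHDonors(mol),
    "HBA":  Lipinski.NumHAcceptors(mol),
    "RotB": Descriptors.NumRotatableBonds(mol),
}
print(properties)

# Screen against the ranges quoted in the text above.
passes = (300 <= properties["MW"] <= 500 and 1 <= properties["logP"] <= 5
          and 2 <= properties["HBD"] <= 6 and 2 <= properties["HBA"] <= 4
          and properties["RotB"] <= 5)
print("within the quoted ranges:", passes)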

These data indicate that the 2-cyclohexa-2,4-dien-1-yl-3-phenylfuran analogs inhibit the enzyme at lower concentrations in the biological system and thus may help people undergoing myocardial infarction. Thus, it is hoped that these newly designed molecules, if synthesized and tested in animal models, hold promise for anti-inflammatory and antithrombotic activities.

5. Conclusions

The prostaglandin endoperoxide H synthases-1 and -2 (PGHS-1 and PGHS-2; also called COX-1 and COX-2) catalyze the committed step in prostaglandin synthesis. COX-1 and -2 are of particular interest because they are the major targets of non-steroidal anti-inflammatory drugs (NSAIDs) such as aspirin, ibuprofen and the newer COX-2 inhibitors. Inhibition of COX with NSAIDs acutely reduces inflammation, pain, and fever, and long-term use of these drugs reduces thrombotic events as well as the development of colon cancer and Alzheimer's disease. Our docking studies identified potential anti-inflammatory agents among the diaryl furan derivative class of compounds that act through a COX-2 and thromboxane synthase inhibition mechanism. The results indicate that 4-(2-phenyl-3-thienyl) benzene sulfonamide and 4-(2-phenyltetrahydrofuran-3-yl) benzene sulfonamide possess significant anti-inflammatory and antithrombotic activity.

Video Games and Violence

Persuasive Speech Outline Template
Your Name: Michael A. Southerland
COMS 101, Section D
Date Due: Aug 2, 2011
Organization: This speech uses problem-solution organization.
Audience analysis: The average age of the audience is between 28 and 35, with ages ranging from 18 to 35. The audience consists of 2 females and 1 male.
Central Idea: Video game violence and children.
Specific Purpose: To inform my audience about video game violence and children and to persuade parents to become more knowledgeable about game violence.
Introduction:
I. Attention-getter

A. As parents, we protect our children from all types of media, from TV and DVDs to the internet. In today's culture the media displays tons of violence, sex, and profane language that we don't want our children to learn from. But what do parents do about video game violence in their home?
B. When it comes to video games, parents neglect their children by not becoming aware of what their kids are buying. As a parent and video game player, I'm aware of the many violent video games being sold to children and teens. Many of these games are sold with no ID verification or parental consent, which raises a huge moral issue.

II. Establish Credibility – Through my personal experience of playing violent video games, and through research, I am here to explain the importance of parents becoming more involved in the purchase of video games and to persuade them to be more responsible about what they allow their children to play.
III. Thesis Statement – Even though we don't expect our children to commit violent acts as they mature into young adults, why do parents allow children to interact with violent games that focus on death, gore, foul language, and adult situations?

IV. Preview Statement – Today I would like to give you some information about the following:
A. Today's video game culture
B. How violent video games affect our youth
C. Tips for parents to avoid problems with violent video games
Body:
I. Today's video game culture
A. Video games are a huge part of the culture today and have been for the past four decades. Kids and teens enjoy spending a lot of time playing them, and most parents find that it's a safe way for them to have fun and to keep their children occupied and out of trouble.

Although this statement can have some truth to it, there are also some downsides that parents are not aware of when they buy a game for their child.
B. Part of the reason parents are not aware of how violent games can be is that they do not spend time playing the games with their children, because, after all, the games are "for kids." Another reason is that the artwork on the cover of a game can often be misleading and innocent-looking.

C. Many games today promote negative themes: killing people or animals, use of drugs and alcohol, criminal behavior or disrespect for the law, sexual exploitation and abuse of women, foul language, and obscene gestures.
II. How violent video games affect our youth
A. Numerous studies show that kids who play violent video games for extended amounts of time display more aggressive behavior, are more prone to fighting with their peers, exercise less and become overweight, have a harder time paying attention, and sometimes struggle academically.

B. Iowa State University Professor of Psychology Craig Anderson has been studying how violent video game play affects youth behavior. The new study he led, analyzing 130 research reports on more than 130,000 subjects worldwide, concludes that exposure to violent video games makes kids more aggressive and less caring, regardless of their age, sex, or culture.
C. According to games editor Kristin Kalning, researchers at the Indiana University School of Medicine say that brain scans of kids who played violent games showed an increase in emotional arousal and a corresponding decrease of activity in brain areas involved in self-control, inhibition and attention.
D. Bernard Cesarone's article on video games and children says that a research review done by NCTV found that 9 out of 12 research studies on the impact of violent video games on normal children reported harmful effects.
III. Tips for parents to avoid problems with violent video games
A. The Entertainment Software Rating Board (ESRB) game rating system helps define what type of content is within the games being purchased.

These ratings are similar to movie ratings but apply to video games (e.g., M = Mature audience, T = Teen audience).
B. Become more involved with your child in video gameplay, to experience the game's content.
C. Provide clear rules about game content and playing time, both inside and outside your home.
IV. There are four ways you can prevent your child from overexposure to violent video games:
A. Become involved in playing video games with your children.
B. Use the ESRB rating system for all of your games.
C. Do some research on games before buying them.

D. Select appropriate games, considering both the content and your child's level of development.
Conclusion:
I. Action –
A. Protect your child from video game violence by:
1. Using the ESRB rating when purchasing game titles.
2. Being a role model for your child in all the video games you play as an adult.
3. Talking to other parents about your video game rules.
4. Becoming involved with your child's video games.
II. Summary Statement – So today we have learned:
A. Today's video game culture
B. How violent video games affect our youth
C. Tips for parents to avoid problems with violent video games

III. Call to Action – To prevent your child from exposure to video game violence, I challenge you to become more active in your child's video game play; even if you have no interest in the games, do it for them. I challenge you to make use of the video game rating system to ensure that your child is not partaking in game violence. Finally, I challenge you to recognize that video game violence should not be taken lightly, because as parents we want to raise our children with the best standards we can possibly provide for them.
Works Cited:
Cesarone, Bernard. "Video Games and Children." KidSource Online, 28 Feb. 2006.

An Analysis of Transformational Leadership

An Analysis of Transformational Leadership
BSP045 Work Psychology
B010898 Cheng Chen
Introduction
Since the early 1980s, there has been an explosion of interest in transformational leadership among scholars and managers. Evidence suggests that the desirability and effectiveness of the transformational leadership style are universal (Den Hartog et al., 1999; Bass et al., 2006). This leadership style, as its name implies, is a process which tends to change and transform individuals (Northouse, 2004).

To help followers grow and develop into leaders, transformational leaders respond to individual followers' needs and empower them (Bass et al., 2006). This style is also concerned with emotions, values, ethics, standards, and long-term goals (Northouse, 2004). More recently, some researchers (Charbonnier-Voirin et al., 2010) have noted that transformational leaders tend to customize their coaching, recognizing each associate's unique capabilities and intelligence and inspiring each person's innovation and critical thinking.

The topic has been widely discussed and analysed by many different sources, and as such it provides an interesting area for further research and discussion. This report will briefly introduce and outline the development of the transformational leadership concept and theory, then examine its conceptual and empirical validity in a global context. It will begin by defining the key terms of transformational leadership, comparing it with transactional leadership and other relevant concepts, in order to better understand the material that follows.

Bass's transformational model of leadership, including its four components and the associated instrument, the Multifactor Leadership Questionnaire (MLQ), will then be reviewed. After that, analysis will be conducted at both the conceptual and empirical levels to evaluate to what extent this model can help with the successful management of people at work, especially in cross-cultural environments. Finally, a summary will be provided and further implications of the findings will be suggested.
Transformational Leadership Model and Measurement

Although Downton first coined the term "transformational leadership" in 1973, the approach did not gain prominence until 1978, when the political sociologist James MacGregor Burns published his book Leadership. In his work, Burns (1978) distinguished transactional from transformational leadership. The former focuses on the social exchanges that occur between leaders and their followers, for example, politicians leading by "exchanging one thing for another: jobs for votes, or subsidies for campaign contributions" (Burns, 1978).

The latter, on the other hand, refers to the process whereby an individual stimulates and inspires others and creates a connection that improves motivation, morality and capability in both leaders and followers (Northouse, 2004). At around the same time, House (1976) proposed a theory of charismatic leadership which received wide attention in the academic leadership literature (Hunt and Conger, 1999). This concept was later often used as a near-synonym for transformational leadership. As House suggested, charismatic leaders act in unique ways and have personal characteristics that affect their followers.

The specific characteristics include being dominant, self-confident, moral, and so on (Northouse, 2004). An expanded and refined version of transformational leadership was provided by Bass in 1985, based to some extent on the earlier work of Burns (1978) and House (1976) (Northouse, 2004). Bass (2006) highlighted that, "to engage the follower in true commitment and involvement in the effort at hand", leaders must deal with the follower's sense of self-esteem, which is how transformational leadership goes beyond the social exchange of the transactional style.

He also emphasized that although charismatic leadership has much in common with transformational leadership, the former is only part of the latter. As refinements were made in both the conceptualization and the measurement of transformational leadership, Bass (2006) summarized that, to achieve superior results, transformational leadership combines four measurable components: Idealized Influence (charisma), Inspirational Motivation, Intellectual Stimulation, and Individualized Consideration.

In order to measure these behaviours, the Multifactor Leadership Questionnaire (MLQ) was developed, identifying the four factors (Bass and Avolio, 1990):
- Idealized Influence (charisma): Acting as strong role models for followers, transformational leaders behave in ways that make them "admired, respected and trusted" and seen as "extraordinarily capable, persistent, and determined", which makes their followers want to emulate them.
- Inspirational Motivation: Transformational leaders articulate an appealing vision for followers and motivate and inspire them by providing task meaning and communicating optimism and enthusiasm for a future orientation.
- Intellectual Stimulation: Transformational leaders stimulate followers to be creative and innovative, to question assumptions, and to approach old problems in new ways.
- Individualized Consideration: Transformational leaders provide a supportive climate by paying attention to each follower's needs and desires.

They actively help followers grow through personal challenges and create new opportunities for their development (Alimo-Metcalfe and Alban-Metcalfe, 2002). Two transactional components are also included in the MLQ:
- Contingent reward: In an exchange process between leaders and followers, approved follower actions (followers finishing what needs to be done) are rewarded with the agreed payoffs, and disapproved actions are punished.
- Management by exception: The corrective transactional dimension.

Active management by exception describes a leader who monitors followers closely for mistakes and intervenes with corrective direction; the passive form involves correction only after requirements have not been met or problems emerge. The full-range model places transformational, transactional, and laissez-faire leadership on an active–passive continuum, of which the last represents the absence of leadership. Originally from French, "laissez-faire" implies a "hands-off, let-things-ride" approach: such leaders take no responsibility, provide no feedback, and ignore followers' needs (Northouse, 2004).

Considering the global context and cultural variation, Bass (1997) argued that transactional and transformational leadership transcend all parts of the globe and all forms of organization.
Advantages of Transformational Leadership
After a long period of development and refinement, the transformational leadership model and instrument have been widely used because they have several strengths, as follows.

The objectives cover from outstanding leaders to multinational corporation CEOs (Northouse, 2004). A recent keywords analysis of all the articles published from 1990 to 2003 in the PsycINFO database showed that the number of studies related to transformational or charisticmatic leadership was larger than the number of all other well-known theories of leadership (e. g. , least preferred co-worker theory, path-goal theory, normative decision theory, substitutes for leadership) combined (Judge and Piccolo, 2004).

Second, numerous studies support the effectiveness and validity of transformational leadership (Yukl, 1999). A meta-analysis of 39 studies (22 published and 17 unpublished) that used the MLQ showed that individuals exhibiting transformational leadership were perceived as more effective leaders with better work outcomes than those who exhibited only transactional leadership (Lowe, Kroeck and Sivasubramaniam, 1996). Specifically, among the transformational dimensions the validity for charisma was .1 and the validity for intellectual stimulation was .60, while validities of .41 for contingent reward and .05 for management by exception were found for transactional leadership. Moreover, in order to explore the relative validity of transactional and transformational leadership, Judge and Piccolo (2004) conducted a meta-analysis covering the whole leadership continuum. The results showed that the validity for transformational leadership was .44, the highest score overall, whereas the second-highest validity, .39, was shown by contingent reward leadership.

In addition, the transformational leadership model has been shown to be valid across different environments. Lowe, Kroeck and Sivasubramaniam (1996) showed that the transformational leadership findings hold for both senior and lower-level leaders, in both public and private contexts. Judge and Piccolo (2004) highlighted that, across various study settings, the validity of transformational leadership appears to generalize with only slight differences among business professionals, university students, the military and public-sector participants.

Third, transformational leadership has positive relationships with follower satisfaction and organizational performance. Transformational leadership regards leadership as a process: by setting more challenging expectations for followers, transformational leaders motivate others "to go the extra mile" (Leong and Fischer, 2011). Followers play a more prominent part in the leadership process, with an instrumental attribution (Bryman, 1992), and their needs and desires receive greater attention from leaders.

A number of empirical findings from the last century have demonstrated that charismatic, transformational and visionary leaders tend to have positive influences on their organizations and followers. The effect scores range from .35 to .50 for organizational performance and from .40 to .80 for effects on follower satisfaction and commitment (Fiol et al., 1999). Two further meta-analytical studies also support this statement (Fuller et al., 1996; Lowe et al., 1996).

More precisely, in a more recent study, Judge and Piccolo (2004) compared the correlation between transformational leadership and follower job satisfaction with the correlation between transformational leadership and organizational performance. The results showed that the former relationship (.58) is stronger than the latter (.23) (Judge and Piccolo, 2004). In addition, transformational leaders tend to motivate and inspire each person’s innovation and critical thinking (Charbonnier-Voirin, et al., 2010). A newer study (Wang and Zhu, 2011) has focused on the relationship between transformational leadership and individual and group creativity.

Survey data, collected through multiple means in a major city in the southern United States, showed significant positive correlations for aggregated group-level transformational leadership with group creative identity (r = .34, p < .01), individual creative identity (r = .20, p < .01), and individual creativity (r = .16, p < .01). The findings also showed that individual-level transformational leadership can improve followers’ creativity by building individuals’ creative identity (Wang and Zhu, 2011).

Fourth, transformational leadership differs from other styles in its strong emphasis on followers’ needs, values, and morals. Burns (1978) argued that transformational leaders move others by motivating them to take on higher moral responsibility and by aligning their own and followers’ value systems with significant moral standards. This kind of leader also demonstrates “high standards of ethical and moral conduct” (Avolio, 1999, p. 43). The influence of transformational leadership on follower moral identity is fundamental and central to this theory (Bass, 1985, 1998; Bass & Riggio, 2006; Bass & Steidlmeier, 1999; Burns, 1978). To address the shortage of empirical studies examining the extent to which leadership influences followers’ moral development, Zhu, Riggio, Avolio and Sosik (2011) recently conducted a study using field survey data and experimental data. The descriptive statistics showed a significant positive relationship between follower moral identity and transformational leadership (r = .0, p < .01). As one of the first empirical studies to focus on the influence of transformational leadership on follower self-reported moral viewpoints, this study also discussed several practical implications. The first approach is to set high moral principles; in that case leaders tend to enhance followers’ moral identity, and consequently follower ethical decision making and behaviours would develop. It was also shown that leaders’ behaviours affected the level of followers’ moral identity.

Therefore, the second approach is to develop transformational leadership across boundaries within the organization. Transformational leaders can build an ethical climate with strong moral principles and aims by setting policies, procedures and processes, and a positive impact on follower moral identity would then be likely (Zhu, et al., 2011).

Fifth, from a practical and applicable perspective, the attributes outlined in transformational leadership and the traits included in the MLQ provide a broad set of concepts describing typical transforming leaders.

These components can be utilized in several stages of the human resource management process, as standards for recruitment, selection and promotion, or as principles for training and development (Northouse, 2004). It has been found that for lower-level leaders, the process of building a vision is particularly valuable in training programs (Lowe, et al., 1996). Additionally, some researchers believe that the transformational leadership approach provides an expanded picture of leadership, one that includes the social exchange between leaders and associates as well as attention to the needs and development of followers (Avolio, 1999; Bass, 1985).

Northouse (2004) states that transformational leadership has intuitive appeal: as described in the transformational perspective, the leader advocates change and considers the growth of others, which is consistent with society’s expectation of a typical leader.

Criticisms of Transformational Leadership

Although the transformational leadership model has been widely used and has made a great contribution to the leadership literature, it also has several drawbacks. The first criticism is that its conceptual clarity has been questioned in terms of its poorly defined parameters (Northouse, 2004).

Because the model involves a large range of behaviours, such as creating a vision, building trust and acting as a social architect, it is difficult to clearly define its parameters. Tracey and Hinkin (1998) emphasized the overlap among the four core components (idealized influence, inspirational motivation, intellectual stimulation and individualized consideration). Yukl (1999) also argued that it is necessary to distinguish the four factors theoretically.

Bryman (1992) highlighted that transformational and charismatic leadership are often treated as synonyms, even though Bass (1985) had already clarified that charisma is only one component of transformational leadership. A recent study by Wu, Tsui and Kinicki (2010) indicated that individualized consideration and intellectual stimulation are more suitable for describing behaviours at the individual level, whereas idealized influence and inspirational motivation are more suitable at the group level. Other criticisms concern the measurement of transformational leadership.

The validity of the MLQ has been questioned even though it has been widely used (Tepper & Percy, 1994). Hunt (1996) criticized the timing of the MLQ’s design, which came before enough qualitative and quantitative data had been collected on the nature of transformational leadership. Hunt (1996) also stated that the MLQ mixed descriptions of leader actions with the results of those behaviours, and that the model failed to give sufficient attention to the two-way nature of the relations between leader and follower.

The correlations between the four factors of transformational leadership (idealized influence, inspirational motivation, intellectual stimulation and individualized consideration) are very close to one another, so it has been questioned whether they are distinct factors (Tejeda, Scandura, & Pillai, 2001). Moreover, there is no clear distinction between transactional factors and transformational factors; hence, some of these factors are not unique to this model. Concerns have also been raised about race and gender validity, because the MLQ came from interview data from 70 South African leaders, 69 of whom were white and all of whom were men. However, although the MLQ, the measurement instrument of transformational leadership, has been criticized in the way it was used, it is at the same time developing, and versions with new, improved items have been generated as promised (Tejeda, et al., 2001). A third criticism some have made is that in a global context, cultural differences do have an effect on which factors are perceived in particular cultural settings (Alimo-Metcalfe & Alban-Metcalfe, 2002).

Den Hartog and other researchers (1999) showed that certain attributes of transformational leadership were endorsed across cultures while others were not; however, they believed that even if some transformational attributes are exhibited in different ways across cultures, a common preference for transformational leadership exists all over the world. Recent findings have explored in depth whether transformational leadership dimensions are universal. More up-to-date research conducted by Leong and Fischer (2011) found that power distance is strongly related to transformational leadership factors (β = -.42, p

The Impact of Internet on Politics

Impact of Internet on Politics

The use of the internet in the 2004 and, most recently, the 2008 elections was huge because of the role it played. The internet has significantly changed the political process because it allowed candidates and voters to connect and gain access to the political process in a fashion not previously available. Facebook, Twitter, MySpace, and candidates’ own social networking sites, such as Senator Barack Obama’s MyBarackObama.com and Senator John McCain’s McCainspace, were used during the 2008 presidential election campaign to connect with voters, raise funds, post campaign ads, and organize meetings. This use of the internet enhanced the degree of participation of interested ordinary citizens and small interest groups in politics. It also gave the average citizen the opportunity to be politically engaged by airing opinions on platforms including YouTube, iTunes, and Facebook and by uploading personal videos in support of or against a candidate.

Based on the results of research by the Pew Internet & American Life Project on the role of the internet in the 2008 elections, “Some 74% of internet users–representing 55% of the entire adult population–went online in 2008 to get involved in the political process or to get news and information about the election.” These statistics are an indication of the internet’s influence on the political process because it made it easier for more people to participate in political activities.
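As a rough check on these figures (my own back-of-the-envelope arithmetic, not part of the Pew report), the two percentages together imply that roughly three quarters of American adults were internet users in 2008:

0.74 x (share of adults online) = 0.55, so share of adults online = 0.55 / 0.74, or about 74% of all adults.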

In recent years, blog sites have been the avenue for political discourse on the internet, and they have reshaped the way politicians and the populace approach the political process. Candidates are now turning to blogs, as was evident during the 2008 presidential election, when all candidates maintained a blog site, for example Senator Hillary Clinton’s blog.hillary.com, Senator Barack Obama’s my.barackobama.com, and Senator John McCain’s www.johnmccain.com/blog.

Candidates used these blogs to update and inform the public and their supporters of their views on current events and the issues that are important to them, which enabled candidates to be more transparent and communicative. According to an ABC News report, in the second quarter of 2003 Gov. Howard Dean raised $7.6 million toward his 2004 presidential election campaign through the internet. This was a huge amount for a candidate who lacked a traditional fundraising network. Fundraising through the internet for the 2008 elections was even more successful than in previous elections.

Jose Antonio Vargas of the Washington Post, in his report “Obama Raised Half a Billion Online,” states that Triple O, Obama’s online operation, revealed that “3 million donors made a total of 6.5 million donations online adding up to $500 million.” These online donations amount to more than 80 percent of the record-breaking $600 million raised during the entire campaign. The affordability and easy access of the internet compared with older technology enabled candidates to reach more people faster at nearly zero cost and also allowed individuals to react quickly, directly, anytime, and anywhere.
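A quick calculation (mine, not from the Post report) bears out the “more than 80 percent” figure:

$500 million / $600 million = 0.83, or roughly 83% of the total funds raised.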

For example, Vargas’s report also quotes data from OPOs (online political operatives): “Obama’s e-mail list contains upward of 13 million addressees. Over the course of the campaign, aides sent more than 7,000 messages, in total more than 1 billion e-mails landed in inbox.” In future elections, candidates’ success at the polls might be determined by their ability to use the internet to reach out and connect with a broader audience, just as it contributed to President Barack Obama’s success in the 2008 election.

Older technologies such as radio and television are still relevant, but the internet also played a major role in the recent political process because it enhanced the participation of all and sundry during the 2008 elections.

Works Cited
http://abcnews.go.com/sections/politics/TheNote/TheNote_July16.html
http://www.pewinternet.org/Reports/2009/6–The-Internets-Role-in-Campaign-2008.aspx
Obama Raised Half a Billion Online: http://voices.washingtonpost.com/44/2008/11/20/0bama_raised-half_a_billion_on.htm

Psy240 Appendix C

Axia College Material
Appendix C
Petra Koenig
PSY240
March 26, 2011

The Sleep Matrix

Why do we sleep? What governs when or how long we sleep? This activity will assist you in understanding two common sleep theories, recuperation and circadian, which provide different answers to these questions. Depending on which one you support, it may change your outlook on sleep and your current sleeping habits. Categorize each characteristic under the correct theory—recuperation or circadian—by placing an “X” in the appropriate column. Then, answer the questions that follow.

Characteristics marked under the recuperation theory:
* Sleep restores the body to a state of homeostasis.
* Function of sleep is to restore energy levels.
* We become tired from wakefulness.
* We sleep until the body is physiologically sound.
* Sleep deprivation may cause behavioral disturbances.

Characteristics marked under the circadian theory:
* Sleep plays no role in physiological functioning.
* We become tired when it is dark out.
* Function of sleep is to conserve energy.
* We sleep based on an internal timing mechanism.
* Sleep depends on vulnerability from predators.
* We have a sleep-wake cycle.
* When we sleep is based on some evolutionary aspects.

1. What are the main differences between the recuperation and circadian theories? Recuperation, in common usage, refers to a period of recovery; under this theory, sleep restores the body to physiological soundness after wakefulness. A circadian rhythm is a roughly 24-hour cycle in the biochemical, physiological or behavioural processes of living beings; under this theory, sleep is driven by an internal timing mechanism.

2. Which theory do you most agree with? Explain. I agree with the recuperation theory because it seems more reliable and makes more sense to me.

The Symbolic Scarlet Letter

The Symbolic Scarlet Letter

Hyatt Waggoner, a noted Hawthorne scholar, says, “The Scarlet Letter is Hawthorne’s most widely read and admired novel and is also the one that has inspired the most inconclusive debate . . .” (Waggoner 118). Much of the trouble in interpreting The Scarlet Letter stems from the fact that the story is highly symbolic. The Scarlet Letter opens with the stark image of the throng of people surrounding the prison door. Hawthorne creates a mood by using the “sad colored” garments and “gray, steeple crowned hats” to give the reader a feeling of gloom and sadness.

Among these dark, sad images Hawthorne interjects the wild red rose. As Hawthorne puts it, it is there “to symbolize some sweet moral blossom, that may be found along the track, or relieve the darkening close of a tale of human frailty and sorrow” (McMichael 1033). The prison is symbolic of moral evil, which would be sin, and the cemetery is a symbol of natural evil, which would be death. It is commonly agreed that colors are used extensively in The Scarlet Letter as symbols. This is illustrated by the scene at the prison door, but the use and importance of color symbolism grow as the book moves along.

Pearl is often identified with the color red, which Waggoner identifies with evil. Pearl is not an evil child in the true sense of the word, but she is a reflection of her parents’ immorality and their love. The color red, with its images of bright glow, shows Pearl to be the product of a moment of passion between Hester and Dimmesdale. Just like the red rose at the start of the story, Pearl is meant to relieve the sorrow and misery. The most famous symbol is of course the scarlet letter itself. Called “The Elaborate Sign” by Waggoner, the letter A exhibits itself a number of times and in a number of ways throughout the story.

The A may appear on Dimmesdale’s chest; it appears as Pearl; in the sky as a huge letter formed by a comet; in the mirror at the Governor’s mansion; and on Hester’s tombstone (McMichael 1150). The letter itself is red, which at first glance would seem to confirm Waggoner’s theory that red in the story is a representation of evil. A case can be made, however, that even in the letter A, red is symbolic of hope and spirit. The scarlet letter is at once both the source of Hester’s shame and disgrace and the source of her strength.

Not only does it suggest the seed out of which Pearl grew, but it is a symbol of Hester doing the right thing in being humbled for her indiscretions. In conclusion, whether or not Hawthorne would intentionally picture a woman and a sinner as a Christ figure is not a question that can be answered within the scope of this paper. The similarities are too strong to ignore. The red of the A is representative of Christ’s blood. Hester, like Christ, went to her cross in satisfaction of another’s sins. The problem of Christ being sinless and Hester not is solved by Hawthorne, as he portrays Hester as the highest moral character in the novel.

Strengths and Weaknesses

Strengths and Weaknesses
Michael Bartlett
Gen/200
8/8/2011
James Bailey

Strengths and Weaknesses

Every individual has personal strengths and weaknesses that shape his or her life in a positive or negative way. Everyone has to understand and be able to use his strengths and weaknesses in a good way to succeed in life. Our everyday life, whether at work or at home, is affected by how we use our strengths and weaknesses. We can improve our strengths and work on our weaknesses to achieve positive goals in life.

The most successful people know that working on their weaknesses can bring significant results. I learn more and more about my strengths and weaknesses every day. I was raised in a big, close-knit family with strong family values and a clear sense of what’s right and what’s wrong. That shaped, in a significant way, how I approach everything from work to personal relationships. I learned to respect everybody regardless of his or her beliefs and different views. That helps me at work to listen to my teammates’ different ideas and work with them in a positive way to achieve results.

I am also ambitious and have strong loyalty to my family and friends. My ambition to be a better person and succeed in life made me start school, look for full-time work, and help my parents out with their needs. I am strong-willed and want to achieve the most in life. I worked at my current job for five years and was promoted twice quickly; then I was medically discharged from the Navy, and it made me very upset. My strong will helps me climb up the ladder and be a better person.

My respect for my teammates earned me their respect and has created a pleasant and effective work environment. My loyalty has given me close friends who have been my friends for a long time, have helped in tough times, and have remained close in good times. I have a big heart and never hesitate to help any of my friends in their time of need. I love children and love taking my son, nieces and nephews out to the movies and the park. My family gives me the strength and good heart to help others. Patience is another of my strengths.

I have learned to be patient with people and not get easily frustrated. This has helped me a lot at work and in a team environment. Patience gets work done. I try to stay calm in whatever I am doing and focus all my energy on completing my tasks. This has helped me get through tough times. I have weaknesses too, like any other person. I have come to realize that I can trust someone too quickly and see only the good side of other people. I had friends who betrayed my trust when I did everything to help them and be true to them.

This has made me cautious about opening up to people, but I am still the same person with a generous heart who still has affection for other people. I have not let these weaknesses change me as a person. I also take some things too seriously when I should not. In personal relationships and at work, I have realized that I get too involved in things that do not require all my focus and energy. I have let simple, unimportant things affect me in a negative way. I am trying to take things easy and not let simple things bother me.

I also have a bad memory. This has caused me embarrassment sometimes, but I am trying to work on it, sometimes making it a point to take notes so I do not forget. One who identifies and learns from his or her mistakes has the best chance to succeed in life. Nobody is perfect, and everybody has areas that need improvement. In writing this essay, I had time to do some critical thinking about myself as a person. I think I need to be a better judge of character and not put too much trust into friendships that end up not being worth it.

I also need to focus my energy on the things that are truly important to me, like my family, and not waste my time on minor issues. I have drawn up a plan of action for improvement in the areas mentioned above. I am going to approach new friendships with extra caution and not give too much trust too fast. I am going to take a step back and take things easy as they come. That would certainly reduce a lot of unnecessary stress.

Reaction Paper on ‘Leadership’

Righteous leaders are rare and seemingly belong to a class of their own. But when is a leader a righteous leader? What makes one virtuous? A lot of people have already listed traits that make up a good leader, and the article provided was just one of them. Even the Bible has summed up the traits that qualify a righteous leader.

Titus 1:7-9 talks about the leader that God requires to lead: the man chosen to have the responsibility to lead his people must be blameless, not self-willed, not soon angry, not given to wine, no striker, not given to filthy lucre, but a lover of hospitality, a lover of good men, sober, just, holy, temperate, holding fast the faithful word as he has been taught, that he may be able by sound doctrine to exhort and convince the gainsayers. Such are the words that describe an honorable leader: a person who offers service and does not abuse power.

He does not maim, oppress and destroy, unlike power-drunk leaders. He is never arrogant but humble. Being humble is one of the “must-have” attitudes of a true leader, and it is quite a challenge for a person to be humble when he is already a boss. As the wise saying goes, “You can know the true nature of a man when you give him money and power…” Wicked leaders are defined in the most banal of senses as compared to righteous leaders. Listed below are a few of my understandings from the literary piece, which provided good points befitting an upright leader. “Who cannot be bought and whose word is their promise…” – they have the trait called genuineness. I believe this is the very foundation of a leader. Leadership begins and ends with this trait. It is because they are genuine people that they manage to lead their people into what they believe is right, and because of this they also gain true and loyal subordinates. “Who put character above wealth…” – righteous leaders are guided by their hearts as well as their minds and are committed to empowering not only themselves but also their subordinates to make a difference.

They are more interested in that than in power, money or prestige. “Who will be honest in small things as well as in great things and who will make no compromise with wrong…” – these leaders never compromise the truth. They do not bargain in their own favor. They are honest and do not twist the truth. They refuse to compromise when principles are tested. “Whose ambitions are not confined to their own selfish desires…” – leaders are selfless people who think of what they can give rather than of what they can gain. They lead because they want to lead their people and not because of some hidden motives behind a veil of sugar-coated words. “Who will not lose their individuality in a crowd… Who will not say they do it because everybody else does it… who can say No with emphasis although the rest of the world says Yes…” – true leaders are never swayed by the majority. Rather, they are very clear about where they stand, and it is also because of this trait that people follow them. As was stated, they are in a league of their own, and that is why they are never swallowed by the majority and can stand out among them.

Instead, they belong to the minority group – people who stand on their own, who have the power to refuse the temptation of the majority. It is this that makes them unique and able to lead with clear sight. True leaders lead by example. They lead with a purpose, their goal is clear in sight, and they are never partial. They are fair in judging people and never show bias based on religion, color or any other mundane things. They are willing to listen to others and never think they know it all. They are fairly considerate and empathic.

Perhaps the best example illustrating the difference between being led by a wicked leader and a righteous one is once again found in the Bible, set right after the reign of Solomon. Solomon was considered a wise king who righteously led Israel until his death, but following that, Israel was thrown into chaos when his son took over the kingdom. Israel was divided due to Rehoboam’s harsh leadership. It was only three generations later, when his great-grandson reigned over Judah, that the kingdom experienced the peace that Israel once felt under the leadership of Solomon.

Aside from what was already mentioned in the literary piece, one characteristic that sets righteous leaders apart from common ones is their ability to recognize their own flaws and work hard to overcome them. They use their natural abilities in leading their people and acknowledge their own shortcomings. There are still many ways to define a righteous leader, and the list could go on and on, but they all talk about the same thing, only put in different words.

The abovementioned qualities, as well as others implied, are considered sine qua non for a righteous leader, and if we really look closely, they mainly revolve around a person’s attitude and state of mind. In conclusion, a righteous leader’s presence makes one feel a sense of worth, a sense of care and protection. It is far easier to be a leader in words only than to be a true leader. But still, one can become righteous and honorable by learning these qualities and sticking to them. It is one of the challenges of an honorable leader.

The Most Dangerous Game

“The Most Dangerous Game” Essay

Sanger Rainsford and General Zaroff are very alike in some ways. Both want to have the upper hand in an argument or situation. In the beginning of “The Most Dangerous Game,” Zaroff has the upper hand, as he knows the terrain and has a threatening bodyguard. He allowed Rainsford to eat and stay at his chateau after he fell overboard. At the end of the story, Rainsford has the upper hand: he wins “the game,” surprises Zaroff, and forces Zaroff to play the game that Zaroff had forced him (Rainsford) to play.

Both men enjoy hunting—although Zaroff savors it in more ways than Rainsford. Rainsford hunts for sport and has less experience. He writes about the animals he hunts, like snow leopards in Tibet. Zaroff has hunted everywhere and hunted everything, and yet he says that it no longer thrills him. Altogether, both are expert hunters and both have military experience—Zaroff from being a Cossack and Rainsford from fighting in France in World War I. In the beginning, Rainsford is the hunted and Zaroff is the hunter.

It twists at the end, with Zaroff being the hunted and Rainsford the hunter. Both would rather be the predator than the prey. Both are very respectful of each other (until Rainsford learns Zaroff is a murderer). They have educated and civilized backgrounds. In the end, it says, “In his library he read, to soothe himself, from the works of Marcus Aurelius.” Aurelius was a Roman emperor and philosopher from the second century A.D. Throughout the story, Zaroff hums bits from a variety of musicals and operas.

Both Zaroff and Rainsford think that they are morally right, and in today’s age, Zaroff would be the one under ethical discussion. Zaroff thinks that he is an “angel of mercy” by taking what he considers the “scum of the earth” off the earth: lascars, blacks, Chinese, whites, and mongrels, according to him. Rainsford has what Zaroff calls a mid-Victorian point of view, thinking that every life is important and no man should have the power to take another man’s life away from him. There are some obvious differences, though.

Zaroff is an older Cossack from Crimea, Russia. He has hunted since he was five years old. As an adult, he was an officer for the Czar, and he left Russia after the debacle in 1917. Rainsford is a famous author from New York who writes about game. He is familiar with game, survival techniques, and guns. He is very cultured and finds the fact that Zaroff hunts humans disgusting. Rainsford is extremely strategic and resourceful: he outsmarts Zaroff by hiding right under his nose. He ends up killing the bodyguard Ivan, one of Zaroff’s best dogs, and Zaroff’s chances of taking another life.

10 Hot Issues in IT Management

Assignment 1: 10 Hot IT Issues Review
David C. Johnson
Capella University
MBA 6180 - Managing Information Assets and Technology
Professor Danielle Babb
February 27, 2011

The author provides a very in-depth look at the landscape of the IT environment. He identifies 10 key ideas that members of the IT community will need to focus on in the coming years. Looking at this article from the perspective of a manager and a leader in an IT organization, I am inclined to agree with Kanter’s assertions.

1. Electronic Commerce Revisited: the Internet and Beyond

Despite the fact that the internet bubble burst, and there was a time when even thinking about starting a business based on the internet would get you laughed out of a business meeting, e-commerce has continued to expand and produce new and better ways of doing business across the world via the internet. The perfect example of this is the increase in online ordering over the holidays. This year I did not set foot in a single store over the holiday season, choosing instead to place orders for all of my Christmas gifts through online vendors like amazon.com or my favorite stores’ online portals.

2. Web Services to Support Internal and External Collaboration

I wholeheartedly agree with this assertion as well, when I look at cloud-based services like Salesforce.com and the upcoming Office 365. The focus of many of our largest software manufacturers has been to deliver full solutions for business problems over the internet. Nowadays a customer can check live inventory at vendor locations, and we can track delivery times and make changes to our orders on the fly via the internet. Thanks to delivery.com I can now order my lunch via a web app on my iPhone. Tools like Google Apps allow for the editing of a single document by multiple participants in real time. How can you be more collaborative than this? The online aspect of this removes the need for employees to be centrally located. In many organizations, in fact, they have begun to do away with the classic office setup, choosing instead to allow employees to telecommute and keeping a few flex offices that can be used by multiple individuals when they have to physically be in the office.

3. Customer Centricity

I think that customer centricity is becoming a more prevalent issue in the IT community. Overall I feel there has been a greater emphasis on human interaction. Whereas in past years it took seven number presses to get to a human being, I have found that this has changed recently to allow easier access to customer service reps.

4. Outsourcing and Insourcing: the Role of Project Management

In this economy in particular, the mix between having in-house expertise and bringing in a consultant for a given task is often close to 50/50.

I know in my organization a great deal of our spend is based on bringing in consultants or outside organizations to manage functions that we do not have the capability to manage. This puts added stress and strain on the project management organization, which is now tasked with managing an outside organization that it has no real control over. It is also the PMO’s responsibility to ensure the consultant complies and lives within the confines of the given statement of work or contract. Having this responsibility placed on someone who is not necessarily trained for it has its own consequences and repercussions.

5. The Value Chain Reaches Out

If a business is the sum total of a collection of business processes, then the supporting processes have to gain fuller advantage and ease of access through better use of information technology. The value chain has been the clear winner in the software boom of the past few years. Information management, knowledge management, CRM, supply chain, and thousands of other functions have been improved and to a point automated by the growth of the software market.

6. IT Infrastructure: Don’t Forget Telecom and Security

The infrastructure of an organization has to be flexible enough to allow for growth both in company size and in functionality. Many organizations find themselves in situations where the systems that worked for them as little as two years ago are no longer sufficient to handle the operations of today. In recent years, I have been with organizations that have had to go through wholesale rebuilding of their storage facilities because they did not account for the number of emails and documents they would be forced to store to comply with SEC or Sarbanes-Oxley regulations.

Knowledge management: In this time when competitive advantage is so hard to come by because everyone can buy the same tools, the combination of knowledge, processes, and the people to perform the processes is the only thing that separates one company from another.

7. The Full Understanding and Positioning of IT in the Organization

The value of IT in an organization is the difference between an IT department that merely keeps the computers on and a serious business partner that can be used to enable an organization to thrive. In many organizations the CIO reports directly to the CEO and has a direct line to a decision maker.

These organizations understand the importance of the CIO position and value it as well. The IT organization is thought of as an integral part of defining and building business processes.

8. Aligning IT Strategy with Business Strategy

To restate the obvious, in many organizations the job of IT is to support the business function. IT strategy revolves around successfully supplementing business processes with technology. I agree that this is a hot issue in IT, but less so in recent years, as the importance of IT to the success of an organization has become more visible in the eyes of upper management and business people in general.

9. IT Still Needs People Skilled and Motivated

We are in the information age, and as such the importance of the IT organization is only going to grow. We need people who are able to grow and change with the times and the technology. IT will need individuals with a variety of skills to help align IT strategy with business strategy. I believe this article is directly related to the concept of this course in that it provides some insight into the minds of IT executives and prepares us for what could very well be our future.

The article focuses on the issues that we as managers will likely be faced with when we begin or continue our careers. In my experience this article hits the nail right on the head. Over the years IT organizations have grown in importance as the complexity of the environments within which we work changes and grows. I think the article could be improved with an update; the changes at many organizations since this article was written could change the author’s mind about many of these issues.

Job Descriptions

Running Head: Staffing Organizations – Job Descriptions

Maintaining Job Descriptions
Sharon Chambers
Strayer University
Dr. Annette West
July 24, 2011

Current Issue

The InAndOut, Inc., company provides warehousing and fulfillment services to small publishers of books with small print runs. After the books are printed and bound at a printing facility, they are shipped to InAndOut for handling. The owner and president of InAndOut, Inc., Alta Fossom, is independently wealthy and delegates all day-to-day management matters to the general manager, Marvin Olson.

Alta requires that Marvin clear any new ideas or initiatives with her prior to taking action. The company is growing, and Marvin expects to hire new employees within the next year to meet this growth. Job descriptions for the company were originally written by a consultant about eight years ago. They have never been revised and are hopelessly outdated. As general manager, Marvin is responsible for all HR management matters. Since Marvin has to clear new projects with Alta, he needs to prepare a brief proposal that can be used to seek approval of new job descriptions.

Importance of Job Descriptions

Whether you’re a small business or a large, multi-site organization, well-written employee job descriptions will help you align employee direction. Alignment of the people you employ with your goals, vision, and mission spells success for your organization. As a leader, you assure the interfunctioning of all the different positions and roles needed to get the job done for the customer. According to Susan M. Heathfield, About.com Human Resources Guide, effectively developed employee job descriptions are communication tools that are significant to your organization’s success.

Poorly written employee job descriptions, on the other hand, add to workplace confusion, hurt communication, and make people feel as if they don’t know what is expected of them. The Foster Thomas blog, Complete HR Solutions, states that “it is essential to maintain accurate job descriptions.” Job descriptions are important from both a legal and a practical standpoint. From a practical point of view, job descriptions help the jobholder understand the responsibilities of the position and provide a sense of where the job fits into the company as a whole. From a legal perspective, job descriptions aid in compliance with several laws.

Job descriptions provide a basis for job evaluation, wage and salary comparison, and an equitable wage and salary structure (Equal Pay Act). Job descriptions are often used as supporting documentation when it comes to establishing a job’s exempt or non-exempt status (Fair Labor Standards Act). Job descriptions provide a basis from which to determine whether an applicant with a disability is qualified for the job and to determine if any accommodation is required to perform the essential functions of the position (ADA analysis). Outdated job descriptions lead to risky business decisions.

For example, if an employee is terminated because he or she could not perform a job function, but that function is not on his or her job description, the company risks a wrongful termination charge. Similarly, if a disabled employee is terminated due to an inability to perform an essential job function, but that essential job function is not listed on the description, the employee may claim that he or she was terminated due to the disability, not a legitimate business reason.

Job Descriptions Format

From a format perspective, job descriptions should contain the following sections and statements:
* Essential duties and responsibilities;
* FLSA classification;
* Job specifications (i.e., education requirements, other skills required);
* Physical demands and work environment;
* Job summary or purpose;
* Signature and date section for the employee and supervisor;
* Physical demands statement: “Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.”

Updating Job Descriptions

Organizations could undergo restructuring, expansion, downsizing, or relocation. Companies, departments, and teams change, as do business priorities and technologies.

This could result in the job functions of employees changing to accommodate the changes in their organizations. Employees might assume new responsibilities or drop tasks that were not working very well. Such changes should not be ignored, and strict adherence to the old job descriptions would be counterproductive to organizational well-being. In the event that the job functions of employees change, it is imperative that their job descriptions change as well. In essence, after writing initial job descriptions, there are a number of good reasons to update them in accordance with the changes taking place in the job functions of the employees (Mader-Clark, 2008). The bottom line is that just as it is important to write new job descriptions when an employer is planning to hire new employees, it is equally important to continually update job descriptions to keep them relevant to the real job functions of employees in the organization (Mader-Clark, 2008; Gan and Kleiner, 2005). Another compelling reason for updating job descriptions is that the hiring process would suffer if one were to hire new employees based on obsolete job descriptions.

One of the important factors determining effective recruiting is successful prescreening of applicants. This involves listing the job’s requirements in the advertisement or providing a realistic preview of the job during the initial call. Job postings using obsolete job descriptions will not attract the right candidate for the job. Job interviews are used to select candidates for the job. Questions asked during selection interviews are structured, behavioral, and job related. In order to have predictive validity, the questions have to be based on authentic job descriptions. Job analysis has to be carried out and the job description written based on the actual job duties. Management could encounter legal problems if job offers and employment contracts are prepared on job descriptions that have not been updated (Roberts, 1997; Mader-Clark, 2008). Freeman (1996) and Mader-Clark (2008) have specified a number of reasons to update a job description, and these are listed below:
* Where a function is added to or deleted from the job.
* Where someone who is hired possesses new skills that do not track the old description.
* Where a higher level of contribution from a position is required, such as a new skill or a body of knowledge.
* Where there has been a change in the requirements of the job, like a special certificate to carry out the job.

Process of Developing a Set of Thorough and Current Job Descriptions

According to Heneman and Judge (2009), the process of writing new job descriptions or updating existing ones should encompass the following elements:
* Defining the need to revise the job description format.
* Job analysis.
* Updating or creating new job descriptions for every classification and making sure that they are premised on current and proper information.
* Making sure that the description meets all legal standards for every position.
* Job evaluation.
* Updating.

The first step here normally concerns making a comprehensive definition of the need to revise the job description format, using pre-existing information and formats as much as possible in order to minimize costs and time. Beth Bulger, Director of HR Services (Foster Thomas), advises that a practical way of updating job descriptions is to ask managers to confirm that the job description is up to date as part of the performance review process. You may also give employees a copy of their job description and ask them to give feedback to their managers. Review all job descriptions on a set schedule, such as during the annual performance review.

Conclusion

Whether you’re hiring someone new, evaluating a current employee, or determining compensation, a job description provides consistency and clarity for everyone involved. Taking the time to write an accurate job description now will save you money, time, and energy in the future.

References

Heathfield, Susan. Employee Job Descriptions: Why Job Descriptions Make Good Business Sense. Retrieved July 24, 2011, from About.com: http://humanresources.about.com/od/glossaryj/a/jobdescriptions.htm

Bulger, Beth. The Importance of Accurate Job Descriptions. Retrieved July 24, 2011, from Foster Thomas Blog: http://www.fosterthomas.com/blog/bid/33742/The-importance-of-Accurate-Job-Descriptions

Farnham, D. (2000). Developing and implementing competence-based recruitment and selection in a social services department - A case study of West Sussex County Council. International Journal of Public Sector Management, 13(4), 369-382.

Gan, M. and Kleiner, B. (2005). How to write job descriptions effectively. Management Research News, 28(8), 48-54.

Dessler, G. (2008). Human Resource Management. New Jersey.

Heneman, H. and Judge, T. (2009). Staffing Organizations (6th ed.). Middleton, WI: McGraw-Hill International Edition.

Mader-Clark, M. (2008). The Job Description Handbook: Everything You Need to Write Effective Job Descriptions - And Avoid Legal Pitfalls (2nd ed.). San Francisco: Nolo.

Roberts, G. (1997). Recruitment and Selection: A Competency Approach. London:

Why Prisons Don’t Work

Critical Response
Ashley Dalton
Hawaii Pacific University

Critical Response to “Why Prisons Don’t Work” by Wilbert Rideau

The prison system is a topic that is widely debated. Many are either for or against how prisons are run. Though I am only an observer with no ties to the prison system, I do agree with many points that Wilbert Rideau made in his original article. What caught my eye was that Mr. Rideau was in the Louisiana State Penitentiary in 1962. He describes the kind of prisoners that were typically brought there. He goes on to share his opinions and observations about the belief “that permanently exiling people to prison will make society safe” (10).

Mr. Rideau goes on to say that prison is not a cure-all. He describes what prisons do as “isolating young criminals long enough to give them a chance to grow up” (31). I agree when he says that prison should only be a temporary arrangement, not a way of life. Also, many criminals are kept there for too long, making prison a way of life and not allowing them to readjust to normal society. The prisoners are potentially being held hostage longer than rehabilitation should allow. Mr. Rideau makes the point that because of mandatory sentences, prisoners are much older.

He states, “rather than pay for new prisons, society would be well served by releasing some of its older prisoners who pose no threat and using the money to catch young street thugs” (41). Think about it. A fifty-, sixty-, or even seventy-year-old prisoner doesn’t necessarily pose a major threat to society, but the younger criminals on the streets do. It shouldn’t take thirty, forty, or more years to rehabilitate someone. However, there are prisoners, such as serial killers, rapists, and worse, who do deserve to rot in prison. Prison terms and sentences are decided by politicians and not necessarily by penal professionals.

I don’t necessarily agree with Mr. Rideau when he states that “even murderers, those more feared by society, pose little risk.” What if those murderers have children and instill their views of society in them? Many are brainwashed by parental figures, such as members of the KKK, who force them to grow up in that environment. So they grow up wanting to please mom and dad rather than form their own opinions. Hence, they end up following in their footsteps wherever that may lead. Mr. Rideau then makes a valuable point that rehabilitation can work. I agree with this.

He brings to attention that the Louisiana State Penitentiary houses around 4,600 prisoners and offers academic training to approximately 240. With tax cuts, maybe that’s all they can do, but education, whether it is academic or just about life’s choices, is a necessary part of rehabilitating anyone. Wilbert Rideau has many valuable points about prison life that may help educate society. His opinions suggest that not all sentences are helpful in a prisoner’s rehabilitation. All prisoners, depending on the severity of their crimes, should be allowed a chance to prove themselves again under proper supervision.

CSR in the Alcohol Industry

Why do CSR and the alcohol industry seem to be incompatible? Corporate social responsibility is a form of corporate self-regulation by which companies take into account the impact of their activities on the environment, consumers, and all the members of the public sphere. But how is it possible to include the public interest in corporate decision-making in the alcohol industry?

* What does Corporate Social Responsibility mean?

Corporate Social Responsibility is about how companies manage their business processes to build an overall positive impact on society.

According to “Making Good Business Sense” by Lord Holme and Richard Watts, published by the World Business Council for Sustainable Development, “Corporate Social Responsibility is the continuing commitment by business to behave ethically and contribute to economic development while improving the quality of life of the workforce and their families as well as of the local community and society at large”. If the other aspects of corporate social responsibility are about doing what you do right, then the marketplace issues are about doing the right thing.

Doing the right thing can be the single most important aspect of a company’s business in terms of securing its longer-term viability. From this perspective, companies have to evaluate the costs they impose on society and approach the selling process with integrity and honesty, trying to operate in an ethical way. The main aim of this type of policy is to satisfy the growing demand of consumers not only for quality and price but also for brand values that match their own. The reputation of a brand is tightly tied to this. There is also an investment benefit.

An increasing number of investment companies look for safe investments and define these in terms of good management of intangibles. This is related to the growing importance of socially responsible investment, which excludes shares in companies operating unethically. A company that is willing to implement corporate social responsibility policies must be aware of the issues it may confront in the marketplace. The impact of its core products on society should be positive, the firm should advertise and trade in an ethical way, and it should also treat its suppliers fairly. One of the main principles of CSR in the marketplace is respecting customers and supporting vulnerable ones.

* The alcohol industry has an obvious overall negative impact on society

Liquor companies mainly produce and sell drinks that have long been considered harmful and addictive. Most major health problems related to alcohol consumption are caused by chronic drinking. Alcohol affects the brain and central nervous system, as it slows down the drinker’s reactions.

High dosages of alcohol can lead to aggressive behaviours, especially when drinking is chronic. As a consequence, in the USA alcohol is partly responsible for more than 66% of murders. Besides, the risk of cancer increases with habitual drinking of alcohol, and one of the primary targets of cancer is the liver. Heavy drinking of alcohol can also lead to anaemia or gout. Alcohol consumption is, moreover, one of the main causes of death worldwide. But this is not the only reason why liquor companies are perceived as socially unacceptable industries.

They are also criticised for their marketing policies. Indeed, in the US, they are for instance accused of targeting minority groups in their marketing campaigns and, in doing so, contributing to the perpetuation of racism. But the main issue concerns the campaigns targeting young people. This is a very profitable and buoyant market for them. Thus, despite the alcohol industry’s claims that it does not advertise to underage youth, young people are consistently exposed to and affected by alcohol marketing.

This exposure increases underage drinking, promotes brand awareness, and influences youth attitudes about drinking. Alcohol is by far the most used and abused drug among America’s teenagers. According to a national survey, nearly one third (31.5%) of all high school students reported hazardous drinking (5+ drinks in one setting) during the 30 days preceding the survey (Youth Risk Behavior Surveillance – United States, 1999). Children who drink alcohol before age 15 are more likely to report academic problems, substance use, and delinquent behaviour in both middle school and high school.

By young adulthood, early alcohol use was associated with employment problems, other substance abuse, and criminal and other violent behaviour. As forms of “new media” emerge and become more sophisticated, alcohol companies are among the first to take advantage of these new marketing opportunities. Coupled with lax age verification, many alcohol companies have designed their Web sites in a way that appeals to youth. Budlight.com, for example, is full of interactive features that have a broad appeal to teens. Visitors can play games, listen to music, watch and rate Bud Light ads, and send Bud Light emails to friends.

There are also a number of items that can be downloaded, including alcohol-branded desktop wallpaper, instant messaging icons, and screensavers. As a result, this industry is strongly regulated by states, as it is seen as a social plague. From this perspective, alcohol producers can hardly promote their social responsibility. They are often criticized for their marketing targeting young people. Their main goal is to maximise profits by selling “sin”, and promoting the well-being of their consumers is fundamentally contrary to this goal.

Twelfth Night Summary

“If this be so, as yet the glass seems true, I shall have share in this most happy wrack. Boy, thou hast said to me a thousand times thou never shouldst love woman like me.” Orsino is a powerful nobleman in the country of Illyria. Orsino is lovesick for the beautiful Lady Olivia but becomes more and more fond of his handsome new page boy, Cesario. Orsino mopes around complaining how heartsick he is over Olivia, when it is clear that he loves to be in love. Olivia is a wealthy noble Illyrian lady who is courted by Orsino and Sir Andrew, but she insists she is mourning for her brother, who has recently died.

She will not marry for seven years. Orsino, the Duke of Illyria, is in love with his neighbour, the Countess Olivia. She has sworn to avoid men’s company for seven years while she mourns the death of her brother, so she rejects him. Nearby, a group of sailors arrive on shore with a young woman, Viola, who has survived a shipwreck in a storm at sea. Viola mourns the loss of her twin brother but decides to dress as a boy to get work as a page to Duke Orsino. Despite his rejection, Orsino sends his new page Cesario (Viola in disguise) to woo Olivia on his behalf. Viola goes unwillingly, as she has already fallen in love at first sight with the duke.

Olivia is attracted to the ‘boy’, and she sends her pompous steward, Malvolio, after him with a ring. Olivia’s uncle, Sir Toby Belch, her servant Maria, and Sir Toby’s friend, Sir Andrew Aguecheek, who is also hoping to woo Olivia and is being led on by Sir Toby, who is trying to fleece him of his money, all plot to expose the self-love of Malvolio. By means of a false letter they trick him into thinking his mistress Olivia loves him. Malvolio appears in yellow stockings and cross-garters, smiling as they have told him to in the letter. Unaware of the trick, the Countess is horrified and has Malvolio shut up in the dark as a madman.

Meanwhile Viola’s twin brother, Sebastian, who has also survived the shipwreck, comes to Illyria. His sea-captain friend, Antonio, is a wanted man for piracy against Orsino. The resemblance between Cesario and Sebastian leads the jealous Sir Andrew to challenge Cesario to a duel. Antonio intervenes to defend Cesario, whom he thinks is his friend Sebastian, and is arrested. Olivia has in the meantime met and become betrothed to Sebastian. Cesario is accused of deserting both Antonio and Olivia when the real Sebastian arrives to apologise for fighting Sir Toby.

When both twins are seen together, all is revealed to Olivia. Olivia's fool, Feste, brings a letter from Malvolio, and on his release the conspirators confess to having written the false letter. Malvolio departs promising revenge. Maria and Sir Toby have married in celebration of the success of their device against the steward. The play ends as Orsino welcomes Olivia and Sebastian and, realising his own attraction to Cesario, he promises that once she is dressed as a woman again they, too, will be married. "If music be the food of love, play on."

Economic Forecast Paper

The US economy is expected to grow at a very slow pace given the fiscal outlook and government cuts in spending. It is unlikely that there will be a government stimulus package in 2012, because of divisive politics in Congress and the piling up of public debt. Fiscal policy in past years helped to stimulate the economy, especially after the inauguration of Barack Obama. Obama signed a 787 billion dollar stimulus package into law in 2009, which helped the economy gain a boost.

The following year the economy stayed roughly neutral, and in 2011 it was slowing down and losing the gains of the previous years. In 2012 economists are looking to the decisions made by government, and to whether the Obama stimulus package, which is about creating jobs, cutting taxes on the middle class and taxing the rich more, will help to stabilize the economy. That is unlikely, however, and the current economy could be dragged to its lowest point in three years.

The European Debt Crisis

The economic health of Europe is important to the prosperity of the US. Europe's ills have already damaged some U.S. interests, from multinational companies to major exporters. Individual investors have many reasons for concern, as the enthusiasm from earlier debt agreements has given way to pessimism and stock market dives. If the U.S. economy takes such a turn into 2012, Europe's financial troubles could wind up affecting the U.S. presidential election. Big American banks have outstanding loans of about $700 billion in Europe, and a default would mean disaster for both Europe and the US. More than 20% of all U.S. exports go to Europe, making it the nation's largest trading partner.

About 14% go to the 17 Eurozone countries. The real worry for U.S. business is that financial panic might cause a broad recession throughout the Eurozone, quelling the appetites of French and German consumers and businesses for U.S. products.

Inflation Forecast

According to ForecastChart.com, "ForecastChart.com is forecasting that US Inflation Rates will be roughly 3.04% over the next year." The table from ForecastChart.com shows an HDTFA of 1.16%, which suggests that US inflation for the 12 months ending November 2012 could easily fall between 4.21% and 1.88%.

[Chart: Annual Inflation Rates]
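As a quick arithmetic check, the quoted range appears to be simply the point forecast plus or minus the HDTFA figure. A minimal sketch, assuming that interpretation (the small gap between 4.20 and the quoted 4.21 presumably reflects rounding in the published inputs):

```python
# Sketch: treating HDTFA as a plus/minus band around the point forecast.
# Figures are the ones quoted above; the interpretation is an assumption.
point_forecast = 3.04  # forecast US inflation over the next year, percent
hdtfa = 1.16           # historical deviation measure quoted above, percent

upper = point_forecast + hdtfa  # 4.20 (the source quotes 4.21)
lower = point_forecast - hdtfa  # 1.88

print(f"Implied range: {lower:.2f}% to {upper:.2f}%")
```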

There are three components of core inflation that have disproportionately caused the rate to be higher than the Federal Reserve Bank's long-term target:

1. Renting homes instead of buying. In spite of ample existing home supply, many potential buyers are having trouble obtaining mortgages, and with some homeowners in foreclosure, both populations are being pushed into the rental market. I believe this is a genuine cause for concern going forward.

2. Motor vehicles. Prices have moved higher due to supply chain disruptions from the earthquake in Japan earlier this year. The tragic events that followed reduced supply and increased pricing power for manufacturers, but I believe that dynamic will gradually be unwound over the coming months as production increases and prices stabilize.

3. Apparel prices. A temporary surge in commodity prices, and higher wages in Asia, where the majority of clothing is manufactured, have led to higher prices, which we expect to ease somewhat in the coming months.

Source: Department of Labor; GS Global ECS Research.

[Chart: forecast contributions to core inflation, 2007-2012 (commodity-related, vehicles, rent, other core)]

Unemployment

I believe the unemployment rate will remain high, that is, above 9 percent, if the current economic situation remains the same. According to US Bureau of Labor Statistics data released in November, the unemployment rate declined by 0.4 percentage point to 8.6 percent. From April through October, the rate held in a narrow range from 9.0 to 9.2 percent. The number of unemployed persons, at 13.3 million, was down by 594,000 in November. The labor force, which is the sum of the unemployed and employed, was down by a little more than half that amount.
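The figures just quoted are enough to back out an approximate labor force, since the unemployment rate is simply the number of unemployed divided by the labor force. A minimal sketch using the November numbers above (the result is an illustration, not an official estimate):

```python
# Sketch: implied labor force from the November figures quoted above.
unemployed = 13.3e6        # unemployed persons
unemployment_rate = 0.086  # 8.6 percent

labor_force = unemployed / unemployment_rate  # about 155 million
employed = labor_force - unemployed           # about 141 million

print(f"Implied labor force: {labor_force / 1e6:.1f} million")
print(f"Implied employed:    {employed / 1e6:.1f} million")
```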

But these data are not entirely accurate, because the Bureau only counts people who are actively looking for a job and does not count those who have given up looking. So in reality the unemployment rate is hovering at around 14-16%.

Interest Rates

Interest rates are expected to remain low to encourage private borrowing. After promising to keep a key short-term interest rate near zero at least through the middle of 2013, the Federal Reserve is trying to lower long-term rates, already at record lows. That will keep a lid on borrowing costs for a while, but it won't do much to help the economy.

Commercial banks will keep their prime lending rate at 3.25% into 2013. The 10-year Treasury note, a benchmark for mortgage rates and corporate bonds, should remain near its current rate of 2% until growth picks up, which won't be sooner than mid-2012. According to the Federal Reserve Bank, the Fed's method of lowering long-term rates is a plan to sell $400 billion worth of short-term debt and buy Treasury notes and bonds with maturities of six to 30 years.

Direct Foreign Investment

Foreign investment flows will remain disappointing through 2012, according to the 2011 A.T. Kearney Foreign Direct Investment Confidence Index, a regular assessment of senior executive sentiment at the world's largest companies. The Index also found executives are wary of making investments in the current economic climate and revealed that they expect the economic turnaround to happen no earlier than 2012. Half of the companies surveyed also report that they are postponing investments as a result of market uncertainty and difficulties in obtaining credit.

Summary

Things are looking worse for the U.S. economy than even three months ago.

Since August, forecasters have revised their outlook to predict more gloom than they had expected, according to a new survey of 45 forecasters by the Federal Reserve Bank of Philadelphia. On average, economic forecasters predict real GDP growth of 2.4 percent in 2012, down from 2.6 percent in August, and the 2012 unemployment rate to be 8.8 percent, compared with 8.4 percent in November. Their predictions for 2012 and 2013 are also lower: just 2.7 percent and 3.5 percent, respectively. And that's still higher than what the Fed itself is projecting, with a growth forecast of 2.4 to 2.7 percent for 2013.

Factors Related to the Academic Performance

Academic performance finds its way into all aspects of life. It plays a vital role in one's achievement and progress, and it is a means of gaining access to better careers. Over the past decade, there has been a decline in the quality of academic achievement of students entering college (McDonald, 1993). Most instructors complain that these students are not well prepared with regard to specific knowledge and study strategies (Briggs et al., 1993). The desire to raise academic performance so that students acquire other competencies creates substantial challenges for educators (Stasz & Brewer, 1998). An understanding of the hindrances to an individual's performance is a key to success; identifying them may produce significant results or contribute to improvement. The study of Ceniza (1986) demonstrated the impact of socio-economic and cultural factors on the scholastic performance of students.

In her study, she pointed out that the socio-economic status of the students' parents has a direct bearing on their educational achievement. Those in the upper socio-economic bracket have achieved better scholastic results. Academic performance is not only dependent upon the student's intelligence but is also enhanced by some factors related to performance in the school. Moreover, the performance of students is associated with varied factors.

The Dee Hwa Liong College Foundation, as an educational institution, firmly believes that education is a very important instrument in improving the quality of people's lives. Everyone who seeks entrance to a college expects that the institution will provide the training from which an economically stable society takes off and develops. Every school of learning therefore aims to assist students in improving academic performance.

The Dee Hwa Liong College Foundation envisions becoming a center for educational services to the youth and developing them to the fullest by equipping them with the knowledge, skills, values and attitudes necessary for their active participation in a just and humane society. Its responsibility to help students improve their academic performance, to ensure success in life, is its major commitment. It is along this line that this research is conducted, to help the school find ways to improve academic performance. This will also help promote a good image for the school.

Behavioural Finance

The occurrence of stock market bubbles and crashes is often cited as evidence against the efficient market hypothesis. It is argued that new information is rarely, if ever, capable of explaining the sudden and dramatic share price movements observed during bubbles and crashes. Samuelson (1998) distinguished between micro efficiency and macro efficiency. Samuelson took the view that major stock markets are micro efficient in the sense that stocks are (nearly) correctly priced relative to each other, whereas the stock markets are macro inefficient.

Macro inefficiency means that prices, at the aggregate level, can deviate from fair values over time. Jung and Shiller (2002) concurred with Samuelson's view and suggested that waves of over- and undervaluation occur for the aggregate market over time. Stock markets are seen as having some predictability in the aggregate and over the long run. Bubbles and crashes have a history that goes back at least to the seventeenth century (MacKay 1852). Some writers have suggested that bubbles show common characteristics.

Band (1989) said that market tops exhibited the following features:

1. Prices have risen dramatically.
2. Widespread rejection of the conventional methods of share valuation, and the emergence of new 'theories' to explain why share prices should be much higher than the conventional methods would indicate.
3. Proliferation of investment schemes offering very high returns very quickly.
4. Intense, and temporarily successful, speculation by uninformed investors.
5. Popular enthusiasm for leveraged (geared) investments.
6. Selling by corporate insiders, and other long-term investors.
7. Extremely high trading volume in shares.

Kindleberger (1989) and Kindleberger and Aliber (2005) argued that most bubbles and crashes have common characteristics. Bubbles feature large and rapid price increases, which result in share prices rising to unrealistically high levels. Bubbles typically begin with a justifiable rise in stock prices. The justification may be a technological advance, or a general rise in prosperity. Examples of technological advance stimulating share price rises might include the development of the automobile and radio in the 1920s and the emergence of the Internet in the late 1990s.

Examples of increasing prosperity leading to price rises could be the United States, Western Europe, and Japan in the 1980s. Cassidy (2002) suggested that this initial stage is characterised by a new idea or product causing changes in expectations about the future. Early investors in companies involved with the innovation make very high returns, which attract the attention of others. The rise in share prices, if substantial and prolonged, leads to members of the public believing that prices will continue to rise.

People who do not normally invest begin to buy shares in the belief that prices will continue to rise. More and more people, typically people who have no knowledge of financial markets, buy shares. This pushes up prices even further. There is euphoria and manic buying. This causes further price rises. There is a self-fulfilling prophecy wherein the belief that prices will rise brings about the rise, since it leads to buying. People with no knowledge of investment often believe that if share prices have risen recently, those prices will continue to rise in the future.

Cassidy (2002) divides this process into a boom stage and a euphoria stage. In the boom stage share price rises generate media interest, which spreads the excitement across a wider audience. Even the professionals working for institutional investors become involved. In the euphoria stage investment principles, and even common sense, are discarded. Conventional wisdom is rejected in favour of the view that it is ‘all different this time’. Prices lose touch with reality. One assumption of the efficient market hypothesis is that investors are rational.

This does not require all investors to be rational, but it does require that the rational investors outweigh the irrational ones. However there are times when irrational investors are dominant. A possible cause of market overreaction is the tendency of some investors (often small investors) to follow the market. Such investors believe that recent stock price movements are indicators of future price movements. In other words they extrapolate price movements. They buy when prices have been rising and thereby tend to push prices to unrealistically high levels.

They sell when prices have been falling and thereby drive prices to excessively low levels. There are times when such naive investors outweigh those that invest on the basis of fundamental analysis of the intrinsic value of the shares. Such irrational investors help to generate bubbles and crashes in stock markets. Some professional investors may also participate on the basis of the greater fool theory. The greater fool theory states that it does not matter if the price paid is higher than the fundamental value, so long as someone (the greater fool) will be prepared to pay an even higher price.

The theory of rational bubbles suggests that investors weigh the probability of further rises against the probability of falls. So it may be rational for an investor to buy shares, knowing that they are overvalued, if the probability-weighted expectation of gain exceeds the probability-weighted expectation of loss. Montier (2002) offers Keynes’s (1936) beauty contest as an explanation of stock market bubbles. The first level of the contest is to choose the stocks that you believe to offer the best prospects. The second level is to choose stocks that you believe others will see as offering the best prospects.

A third level is to choose the stocks that you believe that others will expect the average investor to select. A fourth stage might involve choosing stocks that you believe that others will expect the average investor to see as most popular amongst investors. In other words, the beauty contest view sees investors as indulging in levels of second-guessing other investors. Even if every investor believes that a stock market crash is coming they may not sell stocks. They may even continue to buy. They may plan to sell just before others sell.

In this way they expect to maximise their profits from the rising market. The result is that markets continue to rise beyond what the vast majority of investors would consider to be the values consistent with economic fundamentals. It is interesting to note that Shiller’s survey following the 1987 crash (Shiller 1987) found that 84% of institutional investors and 72% of private investors said that they had believed that the market was overpriced just before the crash. Shiller suggested that people did not realise how many others shared their views that the market was overpriced.
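The 'rational bubble' argument above, in which an investor weighs the probability of further gains against the probability of a crash, can be made concrete with a small expected-value sketch. The probabilities and price moves below are purely illustrative assumptions, not figures from the studies cited.

```python
# Sketch: a probability-weighted case for holding a knowingly overvalued asset.
# All numbers are illustrative assumptions.
p_rise = 0.7         # subjective probability the market keeps rising for now
gain_if_rise = 0.15  # expected further gain if it does (15%)
p_fall = 1 - p_rise  # probability the bubble bursts instead
loss_if_fall = 0.30  # expected loss if it bursts (30%)

expected_return = p_rise * gain_if_rise - p_fall * loss_if_fall
print(f"Probability-weighted expected return: {expected_return:+.3f}")
# 0.7 * 0.15 - 0.3 * 0.30 = +0.015, so riding the bubble can still look
# attractive to this investor despite the known overvaluation.
```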

As Hirshleifer (2001) points out, people have a tendency to conform to the judgements and behaviours of others. People may follow others without any apparent reason. Such behaviour results in a form of herding, which helps to explain the development of bubbles and crashes. If there is a uniformity of view concerning the direction of a market, the result is likely to be a movement of the market in that direction. Furthermore, the herd may stampede. Shiller (2000) said that the meaning of herd behaviour is that investors tend to do as other investors do.

They imitate the behaviour of others and disregard their own information. Brown (1999) examined the effect of noise traders (non-professionals with no special information) on the volatility of the prices of closed-end funds (investment trusts). A shift in sentiment entailed these investors moving together and an increase in price volatility resulted. Walter and Weber (2006) found herding to be present among managers of mutual funds. Walter and Weber (2006) distinguished between intentional and unintentional herding. Intentional herding was seen as arising from attempts to copy others.

Unintentional herding emerges as a result of investors analysing the same information in the same way. Intentional herding could develop as a consequence of poor availability of information. Investors might copy the behaviour of others in the belief that those others have traded on the basis of information. When copying others in the belief that they are acting on information becomes widespread, there is an informational cascade. Another possible cause of intentional herding arises as a consequence of career risk. If a fund manager loses money whilst others make money, that fund manager’s job may be in jeopardy.

If a fund manager loses money whilst others lose money, there is more job security. So it can be in the fund manager’s interests to do as others do (this is sometimes referred to as the reputational reason for herding). Since fund managers are often evaluated in relation to benchmarks based on the average performance of fund managers, or based on stock indices, there could be an incentive to copy others since that would prevent substantial underperformance relative to the benchmark. Walter and Weber (2006) found positive feedback trading by mutual fund managers.

In other words the managers bought stocks following price rises and sold following falls. If such momentum trading is common, it could be a cause of unintentional herding. Investors do the same thing because they are following the same strategy. It can be difficult to know whether observed herding is intentional or unintentional. Hwang and Salmon (2006) investigated herding in the sense that investors, following the performance of the market as a whole, buy or sell simultaneously. Investigating in the United States, the UK, and South Korea they found that herding increases with market sentiment.

They found that herding occurs to a greater extent when investor expectations are relatively homogeneous. Herding is strongest when there is confidence about the direction in which the market is heading. Herding appeared to be persistent and slow moving. This is consistent with the observation that some bubbles have taken years to develop. Kirman (1991) suggests that investors may not necessarily base decisions on their own views about investments but upon what they see as the majority view. The majority being followed are not necessarily well-informed rational investors.

The investors that are followed may be uninformed and subject to psychological biases that render their behaviour irrational (from the perspective of economists). Rational investors may even focus on predicting the behaviour of irrational investors rather than trying to ascertain fundamental value (this may explain the popularity of technical analysis among market professionals). There are theories of the diffusion of information based on models of epidemics. In such models there are ‘carriers’ who meet ‘susceptibles’ (Shiller 1989).

Stock market (and property market) bubbles and crashes are likened to the spread of epidemics. There is evidence that ideas can remain dormant for long periods and then be triggered by an apparently trivial event. Face-to-face communication appears to be dominant, but the media also plays a role. Cassidy (2002) suggested that people want to become players in an ongoing drama in which ownership of stocks gives them a sense of being part of a social movement. People invest because they do not want to be left out of the exciting developments.

The media are an integral part of market events because they want to attract viewers and readers. Generally, significant market events occur only if there is similar thinking among large groups of people, and the news media are vehicles for the spreading of ideas. The news media are attracted to financial markets because there is a persistent flow of news in the form of daily price changes and company reports. The media seek interesting news. The media can be fundamental propagators of speculative price movements through their efforts to make news interesting (Shiller 2000).

They may try to enhance interest by attaching news stories to stock price movements, thereby focusing greater attention on the movements. The media are also prone to focus attention on particular stories for long periods. Shiller refers to this as an ‘attention cascade’. Attention cascades can contribute to stock market bubbles and crashes. Davis (2006) confirmed the role of the media in the development of extreme market movements. The media were found to exaggerate market responses to news, and to magnify irrational market expectations.

At times of market crisis the media can push trading activity to extremes. The media can trigger and reinforce opinions. It has been suggested that memes may play a part in the process by which ideas spread (Lynch 2001). Memes are contagious ideas. It has been suggested that the success of a meme depends upon three critical factors: transmissivity, receptivity, and longevity. Transmissivity is the amount of dissemination from those with the idea. Receptivity concerns how believable, or acceptable, the idea is. Longevity relates to how long investors keep the idea in mind.

Smith (1991) put forward the view that bubbles and crashes seem to have their origin in social influences. Social influence may mean following a leader, reacting simultaneously and identically with other investors in response to new information, or imitation of others who are either directly observed or observed indirectly through the media. Social influence appears to be strongest when an individual feels uncertain and finds no directly applicable earlier personal experience. Deutsch and Gerard (1955) distinguish between ‘normative social influence’ and ‘informational social influence’.

Normative social influence does not involve a change in perceptions or beliefs, merely conformity for the benefits of conformity. An example of normative social influence would be that of professional investment managers who copy each other on the grounds that being wrong when everyone else is wrong does not jeopardise one’s career, but being wrong when the majority get it right can result in job loss. This is a form of regret avoidance. If a bad decision were made, a result would be the pain of regret. By following the decisions of others, the risk of regret is reduced. This is safety in numbers.

There is less fear of regret when others are making the same decisions. Informational social influence entails acceptance of a group's beliefs as providing information. For example, share purchases by others deliver information that the buyers believe prices will rise in the future. This is accepted as useful information about the stock market and leads others to buy also. This is an informational cascade; people see the actions of others as providing information and act on that information. Investors buy because they know that others are buying, and in buying provide information to other investors who buy in their turn.

Informational cascades can cause large, and economically unjustified, swings in stock market levels. Investors cease to make their own judgements based on factual information, and use the apparent information conveyed by the actions of others instead. Investment decisions based on relevant information cease, and hence the process whereby stock prices come to reflect relevant information comes to an end. Share price movements come to be disconnected from relevant information. Both of the types of social influence identified by Deutsch and Gerard (1955) can lead to positive feedback trading.

Positive feedback trading involves buying because prices have been rising and selling when prices have been falling, since price movements are seen as providing information about the views of other investors. Buying pushes prices yet higher (and thereby stimulates more buying) and selling pushes prices lower (and hence encourages more selling). Such trading behaviour contributes to stock market bubbles and crashes. People in a peer group tend to develop the same tastes, interests, and opinions (Ellison and Fudenberg 1993). Social norms emerge in relation to shared beliefs.

These social norms include beliefs about investing. The social environment of an investor influences investment decisions. This applies not only to individual investors, but also to market professionals. Fund managers are a peer group; fundamental analysts are a peer group; technical analysts are a peer group. Indeed market professionals in aggregate form a peer group. It is likely that there are times when these peer groups develop common beliefs about the direction of the stock market. Common beliefs tend to engender stock market bubbles and crashes. Welch (2000) investigated herding among investment analysts.

Herding was seen as occurring when analysts appeared to mimic the recommendations of other analysts. It was found that there was herding towards the prevailing consensus, and towards recent revisions of the forecasts of other analysts. A conclusion of the research was that in bull markets the rise in share prices would be reinforced by herding. Research on investor psychology has indicated certain features about the behaviour of uninformed investors, who are often referred to as noise traders in the academic literature. Tversky and Kahneman (1982) found that they have a tendency to overreact to news.

DeBondt (1993) found that they extrapolate trends, in other words they tend to believe that the recent direction of movement of share prices will continue. Shleifer and Summers (1990) found evidence that they become overconfident in their forecasts. This latter point is consistent with the view that bubbles and crashes are characterised by some investors forgetting that financial markets are uncertain, and coming to believe that the direction of movement of share prices can be forecast with certainty. Barberis et al. (1998) suggested that noise traders, as a result of misinterpretation of information, see patterns where there are none.

Lee (1998) mentioned that a sudden and drastic trend reversal may mean that earlier cues of a change in trend had been neglected. Clarke and Statman (1998) found that noise traders tend to follow newsletters, which in turn are prone to herding. It seems that many investors not only extrapolate price trends but also extrapolate streams of good or bad news, for example a succession of pieces of good news leads to the expectation that future news will also be good. Barberis et al. (1998) showed that shares that had experienced a succession of positive items of news tended to become overpriced.

This indicates that stock prices overreact to consistent patterns of good or bad news. Lakonishok et al. (1994) concluded that investors appeared to extrapolate the past too far into the future. There is evidence that the flow of money into institutional investment funds (such as unit trusts) has an impact on stock market movements. Evidence for a positive relationship between fund flows and subsequent stock market returns comes from Edelen and Warner (2001), Neal and Wheatley (1998), Randall et al. (2003), and Warther (1995). It has been suggested by Indro (2004) that market sentiment (an aspect of crowd psychology) plays an important role.

Indro found that poll-based measures of market sentiment were related to the size of net inflows into equity funds. It appears that improved sentiment (optimism) generates investment into institutional funds, which in turn brings about a rise in stock market prices (and vice versa for increased pessimism). If stock market rises render market sentiment more optimistic, a circular process occurs in which rising prices and improving sentiment reinforce each other. It has often been suggested that small investors have a tendency to buy when the market has risen and to sell when the market falls.

Karceski (2002) reported that between 1984 and 1996 average monthly inflows into US equity mutual funds were about eight times higher in bull markets than in bear markets. The largest inflows were found to occur after the market had moved higher and the smallest inflows followed falls. Mosebach and Najand (1999) found interrelationships between stock market rises and flows of funds into the market. Rises in the market were related to its own previous rises, indicating a momentum effect, and to previous cash inflows to the market. Cash inflows also showed momentum, and were related to previous market rises.

A high net inflow of funds increased stock market prices, and price rises increased the net inflow of funds. In other words, positive feedback trading was identified. This buy high/sell low investment strategy may be predicted by the ‘house money’ and ‘snake bite’ effects (Thaler and Johnson 1990). After making a gain people are willing to take risks with the winnings since they do not fully regard the money gained as their own (it is the ‘house money’). So people may be more willing to buy following a price rise. Conversely the ‘snake bite’ effect renders people more risk-averse following a loss.

The pain of a loss (the snake bite) can cause people to avoid the risk of more loss by selling investments seen as risky. When many investors are affected by these biases, the market as a whole may be affected. The house money effect can contribute to the emergence of a stock market bubble. The snake bite effect can contribute to a crash. The tendency to buy following a stock market rise, and to sell following a fall, can also be explained in terms of changes in attitude towards risk. Clarke and Statman (1998) reported that risk tolerance fell dramatically just after the stock market crash of 1987.

In consequence investors became less willing to invest in the stock market after the crash. MacKillop (2003) and Yao et al. (2004) found a relationship between market prices and risk tolerance. The findings were that investors became more tolerant of risk following market rises, and less risk tolerant following falls. The implication is that people are more inclined to buy shares when markets have been rising and more inclined to sell when they have been falling; behaviour which reinforces the direction of market movement. Shefrin (2000) found similar effects among financial advisers and institutional investors. Grable et al. (2004) found a positive relationship between stock market closing prices and risk tolerance. As the previous week's closing price increased, risk tolerance increased. When the market dropped, the following week's risk tolerance also dropped. Since risk tolerance affects the willingness of investors to buy risky assets such as shares, the relationship between market movements and risk tolerance tends to reinforce the direction of market movement. During market rises people become more inclined to buy shares, thus pushing share prices up further. After market falls investors are more likely to sell, thereby pushing the market down further.
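A minimal simulation sketch of the reinforcing loop just described, in which rising prices lift risk tolerance and higher tolerance feeds back into buying. The starting values and sensitivities are arbitrary assumptions chosen only to show the direction of the effect.

```python
# Sketch: a self-reinforcing loop between market returns and risk tolerance.
# Parameters are illustrative assumptions, not calibrated estimates.
price = 100.0
risk_tolerance = 0.50   # share of investors currently willing to hold equities
sensitivity = 0.8       # how strongly tolerance responds to the latest return
demand_impact = 0.1     # how strongly net buying moves the price

last_return = 0.02      # a small initial rise starts the loop
for period in range(8):
    # Rising returns raise risk tolerance (falling returns would lower it)
    risk_tolerance = min(1.0, max(0.0, risk_tolerance + sensitivity * last_return))
    # More tolerant investors buy more, which pushes the price up further
    net_buying = risk_tolerance - 0.50
    new_price = price * (1 + demand_impact * net_buying)
    last_return = (new_price - price) / price
    price = new_price
    print(f"period {period}: price {price:7.2f}, risk tolerance {risk_tolerance:.3f}")
```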

Projection bias is high sensitivity to momentary information and feelings such that current attitudes and preferences are expected to continue into the future (Loewenstein et al. 2003). Mehra and Sah (2002) found that risk tolerance varied over time and that people behaved as if their current risk preference would persist into the future. In other words the current level of risk tolerance was subject to a projection bias such that it was expected to continue into the future. Grable et al. (2006) pointed out that this interacts with the effects of market movements on risk tolerance.

A rise in the market enhances risk tolerance, projection bias leads to a belief that current risk tolerance will persist, people buy more shares, share purchases cause price rises, price rises increase risk tolerance, and so forth. A virtuous circle of rising prices and rising risk tolerance could emerge. Conversely there could be a vicious circle entailing falling prices and rising risk-aversion.

The Role of Social Mood

People transmit moods to one another when interacting socially. People not only receive information and opinions in the process of social interaction, they also receive moods and emotions.

Moods and emotions interact with cognitive processes when people make decisions. There are times when such feelings can be particularly important, such as in periods of uncertainty and when the decision is very complex. The moods and emotions may be unrelated to a decision, but nonetheless affect the decision. The general level of optimism or pessimism in society will influence individuals and their decisions, including their financial decisions. There is a distinction between emotions and moods. Emotions are often short term and tend to be related to a particular person, object, or situation.

Moods are free-floating and not attached to something specific. A mood is a general state of mind and can persist for long periods. Mood may have no particular causal stimulus and have no particular target. Positive mood is accompanied by emotions such as optimism, happiness, and hope. These feelings can become extreme and result in euphoria. Negative mood is associated with emotions such as fear, pessimism, and antagonism. Nofsinger (2005a) suggested that social mood is quickly reflected in the stock market, such that the stock market becomes an indicator of social mood.

Prechter (1999, 2001), in proposing a socionomics hypothesis, argued that moods cause financial market trends and contribute to a tendency for investors to act in a concerted manner and to exhibit herding behaviour. Many psychologists would argue that actions are driven by what people think, which is heavily influenced by how they feel. How people feel is partly determined by their interactions with others. Prechter’s socionomic hypothesis suggests that human interactions spread moods and emotions. When moods and emotions become widely shared, the resulting feelings of optimism or pessimism cause uniformity in financial decision-making.

This amounts to herding and has impacts on financial markets at the aggregate level. Slovic et al. (2002) proposed an affect heuristic. Affect refers to feelings, which are subtle and of which people may be unaware. Impressions and feelings based on affect are often easier bases for decision-making than an objective evaluation, particularly when the decision is complex. Since the use of affect in decision-making is a form of short cut, it could be regarded as a heuristic. Loewenstein et al. (2001) showed how emotions interact with cognitive thought processes and how at times the emotional process can dominate cognitive processes.

Forgas (1995) took the view that the role of emotions increased as the complexity and uncertainty facing the decision-maker increased. Information can spread through society in a number of ways: books, magazines, newspapers, television, radio, the Internet, and personal contact. Nofsinger suggests that personal contact is particularly important since it readily conveys mood and emotion as well as information. Interpersonal contact is important to the propagation of social mood. Such contact results in shared mood as well as shared information.

Prechter suggested that economic expansions and equity bull markets are associated with positive feelings such as optimism and enthusiasm whereas economic recessions and bear markets correspond to an increase in negative emotions like pessimism, fear, and anger. During a stock market uptrend society and investors are characterised by feelings of calmness and contentment, at the market top they are happy and enthusiastic, during the market downturn the feelings are ones of sadness and insecurity, whilst the market bottom is associated with feelings of anger, hostility, and tension.

Dreman (2001) suggested that at the peaks and troughs of social mood, characterised by manias and panics, psychological influences play the biggest role in the decisions of investment analysts and fund managers. Forecasts will be the most positive at the peak of social mood and most negative at the troughs. Psychological influences can contaminate rational decision-making, and may be dominant at the extreme highs and lows of social mood. At the extremes of social mood the traditional techniques of investment analysis might be rejected by many as being no longer applicable in the new era.

Shiller (1984) took the view that stock prices are likely to be particularly vulnerable to social mood because there is no generally accepted approach to stock pricing; different analysts use different models in different ways. The potential influence of social mood is even greater among non-professionals who have little, or no, understanding of pricing models and financial analysis. Nofsinger (2005a) saw the link to be so strong that stock market prices could be used as a measure of social mood. Peaks and troughs of social mood are characterised by emotional decision-making rather than rational evaluation.

Cognitive evaluations indicating that stocks are overpriced are dominated by a mood of optimism. One's downplaying of rational evaluation is reinforced by the fact that others downplay it too. The optimism of others validates one's own optimism. It is often argued that the normal methods of evaluation are no longer applicable in the new era. Fisher and Statman (2002) surveyed investors during the high-tech bubble of the late 1990s and found that although many investors believed stocks to be overpriced, they expected prices to continue rising.

Eventually social mood passes its peak and cognitive rationality comes to dominate social mood. Investors sell and prices fall. If social mood continues to fall, the result could be a crash in which stock prices fall too far. The situation is then characterised by an unjustified level of pessimism, and investors sell shares even when they are already underpriced. Investors’ sales drive prices down further and increase the degree of underpricing. Fisher and Statman (2000) provided evidence that stock market movements affect sentiment.

A vicious circle could develop in which falling sentiment causes prices to fall and declining prices lower sentiment. Taffler and Tuckett (2002) provided a psychoanalytic perspective on the technology stock bubble and crash of the late 1990s and early 2000s, and in so doing gave a description of investor behaviour totally at odds with the efficient markets view of rational decision-making based on all relevant information. They made it clear that people do not share a common perception of reality; instead everyone has their own psychic reality.

Total Quality Management Schools

Unlike the previous Fayolian process texts, Drucker developed three broader managerial functions: (1) managing a business; (2) managing managers; and (3) managing workers and work. He proposed that in every decision the manager must put economic considerations first. Drucker recognized that there may be other non-economic consequences of managerial decision, but that the emphasis should still be placed on economic performance.

Deming, an American, is considered to be the father of quality control in Japan. In fact, Deming suggested that most quality problems are not the fault of employees but of the system. He emphasized the importance of improving quality by suggesting a five-step chain reaction. This theory proposes that when quality is improved, (1) costs decrease because of less rework, fewer mistakes, fewer delays, and better use of time and materials; (2) productivity improves; (3) market share increases with better quality and prices; (4) the company increases profitability and stays in business; and (5) the number of jobs increases.

Deming developed a 14-point plan to summarize his teachings on quality improvement. These fourteen points are listed below:

1. Create constancy of purpose toward the improvement of product and service, and communicate this goal to all employees.
2. Adopt the new philosophy of quality throughout all levels of the organization.
3. Cease dependence on inspection to achieve quality; understand that quality comes from improving processes.
4. No longer select suppliers based solely on price. Move towards developing a long-term relationship with a single supplier.
5. Processes, products, and services should be improved constantly, reducing waste.
6. Institute extensive on-the-job training.
7. Improve supervision.
8. Drive out fear of expressing ideas and concerns.
9. Break down barriers between departments. People should be encouraged to work together as a team.
10. Eliminate slogans and targets for the workforce.
11. Eliminate work quotas on the factory floor.
12. Remove barriers that rob workers of their right to pride of workmanship.
13. Institute a program of education and self-improvement.
14. Make sure to put everyone in the company to work to accomplish the transformation.

The contemporary management school brings a more interdisciplinary approach to the field of management. The very important writings of W. Edwards Deming in the area of productivity improvement and those of Peter Drucker on MBO and management innovation have a major impact on the way today's organizations are managed. The integrative methodologies of the systems approach and contingency theory give managers the latitude they need to integrate the research of the many management schools.

Joseph Juran's experience led him to conclude that more than 80 percent of all quality defects are caused by factors within management's control. He referred to this as the "Pareto principle." From this theory, he developed a management trilogy that included quality planning, control, and improvement. Juran suggested that an area be selected which has experienced chronic quality problems; it should be analyzed, and then a solution generated and finally implemented. The quality work of Joseph Juran and W. Edwards Deming changed the way people looked at business.
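A small sketch of the kind of Pareto analysis Juran's principle implies: tally defects by cause and see how few causes account for most of the problems. The defect counts below are made-up illustrative data.

```python
# Sketch: Pareto analysis of defect causes (counts are made-up examples).
defects = {
    "process settings": 120,
    "supplier material": 85,
    "operator error": 25,
    "machine wear": 15,
    "other": 5,
}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{cause:18s} {count:4d}   cumulative {100 * cumulative / total:5.1f}%")
# The top two causes account for roughly 82% of all defects: the "vital few"
# that Juran argued lie largely within management's control.
```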

Anthropogenic Climate Change and Its Effects

Human-induced climate change will have various effects on our landscapes and on our way of life. Discuss the causes of anthropogenic climate change, and using examples from Ireland and internationally, outline some of the potential impacts that climate change, and society’s responses to it, will have on our future geography. Anthropogenic climate change is the greatest threat to the planet.

The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report states that most of the observed increase in global average temperatures since the mid-twentieth century is very likely to be due to the observed increase in anthropogenic greenhouse gas concentrations. In order to get a better understanding, this essay will briefly discuss the differences between climate change and anthropogenic climate change and the effects that this will have on a global scale.

Using examples from Ireland and the Mediterranean we can see specifically how climate change will affect the environment we live in on several different levels: physically, ecologically, socially and economically. To conclude, what we can do to mitigate the effects of climate change is discussed. Climate change is something which occurs naturally and is affected by the following: the hydrospheric, atmospheric, cryospheric, and biospheric systems. It is also affected by solar radiation from the sun and the Milankovitch cycles (the earth's axial tilt and orbit around the sun). These systems interact with one another to give the earth its climate.

However, with the population explosion which has occurred since the industrial revolution, from less than one billion in the 1800s to almost seven billion today (http://esa.un.org/unpp), the pressure human activity has put on these systems is causing the earth's climate to change and warm at a much more rapid pace than would occur naturally. We know that this increase in temperature is occurring from records kept since the beginning of the 1900s, ice core samples and proxy records. Greenhouse gases (GHGs) have the most significant impact on the climate. What are greenhouse gases and how does human activity affect them?

Greenhouse gases are carbon dioxide, water vapour, methane, nitrous oxide and chlorofluorocarbons (CFCs). These gases trap solar radiation from the sun in the atmosphere, causing a warming effect. Human activity and the production of GHGs affect the environment in the following ways: changes in land use through agriculture, urbanisation and deforestation; and the use of aerosols, which release harmful CFCs and other halocarbons into the atmosphere. However, the most important anthropogenic cause of climate change is the use of fossil fuels, which releases carbon dioxide into the atmosphere.

The world is almost entirely reliant on fossil fuels for its supply of energy. Agriculture and urbanisation change the earth's albedo (reflective properties), thus affecting the amount of radiation from the sun reflected back out to space. The increase in agriculture to meet the world's demands for food is "thought to contribute more than twice as much methane as natural resources" (Middleton, 2003, p183). As we know, trees absorb carbon dioxide; with deforestation, however, we reduce the earth's ability to absorb carbon dioxide and produce oxygen.

CFCs and halocarbons are not naturally occurring compounds. They increase the atmospheric albedo and alter cloud properties, they last for a very long time in the atmosphere, from 60 to 100 years, and they have contributed to the depletion of the ozone layer. Carbon dioxide is the biggest contributor to climate change. "Human activities such as burning fossil fuels, coal, oil, gas, for use in power stations, industry and transport, have increased atmospheric carbon dioxide by 35 per cent since the beginning of industrialisation" (http://www.noaanews.noaa.gov). The potential impacts of climate change can be ecological, physical, social and economic. The physical aspects of anthropogenic climate change can be seen in the increase in the world's average temperature, increased precipitation, a rise in sea levels, reduced snow cover and melting of polar icecaps, and an increase in extreme weather events such as heat waves, cyclones and flooding. These events in turn have an impact on ecological systems where they occur and could cause the extinction of some species.

If we take flooding, an increase in sea levels and a fall in the pH of the sea due to absorption of CO2 as an example, we will see the following: impacts on marine diversity, with the possibility of some species being unable to adapt to the warmer temperatures or lower pH levels, bleaching of coral reefs, and coastal erosion. Economically and socially, the increase in sea level may cause problems for coastal towns as beaches and harbours are affected by higher tides, fishing industries are affected by the loss of marine diversity, and local underground drinking water supplies are contaminated by sea water.

In extreme situations a coastal town may be forced to move. In Ireland temperatures are expected to increase by 3-4 degrees by the end of the century, with the south and east experiencing the most significant warming. The Community Climate Change Consortium for Ireland report Ireland in a warmer climate (2008) states that the autumn and winter months will be wetter and milder and that we should also expect intense cyclones from the Atlantic more frequently, due to the rise in the surface temperature of the ocean.

If we look elsewhere for examples of climate change, we should look no further than the Mediterranean. The Mediterranean is a region of vulnerability because climate change there will increase temperature and decrease precipitation, leading to a lack of water, which will in turn have an impact on agriculture and the region's main source of income, tourism. This increase in temperature in an area already known for its warm dry summers and mild winters will lead to an increase in wildfires, soil erosion and a higher demand on water for irrigation systems.

The Mediterranean sea level is expected to rise; this will have a significant impact on coastal areas and low-lying islands that rely on their coasts and beaches for tourism. This increase in sea level will also have a very serious impact on the city of Venice. The Mediterranean is well known for its fresh fish and scuba diving, and a fall in the pH of the sea there will have a detrimental effect on biodiversity and will also cause coral bleaching. "As a result the Mediterranean has become a climate change Hot Spot" (www.planbleu.org).

What can we do to reduce and/or prevent further climate change? Most countries have now accepted the need to make an effort to reduce emissions of GHGs. The first major step on a global scale came in 1987 with the Montreal Protocol, an agreement to reduce CFCs and other halocarbons, which destroy the ozone layer. There have been other international agreements to reduce CO2 emissions, such as the UN Framework Convention on Climate Change in 1992, where 150 countries signed up in a joint effort. The Kyoto Protocol expanded on the UNFCCC by focusing not just on CO2 but on other harmful GHGs as well.

Other government initiatives include tree planting to increase the Earth's carbon sink capacity, carbon sequestration, which involves capturing CO2 and storing it, reducing dependency on motor vehicle use in cities, and reducing the world's dependency on fossil-fuel-based power sources. In order to do this, governments need to encourage the development of sustainable, renewable energy sources such as wind, hydro, tidal, solar and geothermal power. Ignorance, the tragedy of the commons, poor valuation, dependency and exploitation are all key issues which contribute towards anthropogenic climate change.

To change this, education will play a key role. Currently all the environmental issues that we face arise from people deliberately or inadvertently misusing or abusing the natural environment. Climate change is the greatest challenge facing humanity, and its impacts will not be distributed equally or evenly around the world. Developing countries in Africa are likely to feel the effects much more severely than mainland Europe. So it is essential that developed countries commit themselves to fulfilling their agreements under the UNFCCC and the Kyoto and Montreal Protocols. To summarise, climate change is a naturally occurring process of the earth.

However, human activity has increased the amount of GHGs in the atmosphere, putting the carbon cycle and climate system under severe pressure. Anthropogenic climate change currently has the upper hand. The effects that are being felt around the globe may not in some cases be reversible, and the loss may not be realised or calculated for many years to come. In order to reduce or negate the impact of anthropogenic climate change, there are steps which we as a global community can and must take to ensure that future generations may enjoy the Earth as we have.

HR Accounting Policies in Infosys

By early 2000, many companies in India had started valuing their human capital and reported the same in their balance sheets and other financial statements. Briefly explain the concept of valuation of human resources and compare the various models available for human resources accounting.

Ans: HRA involved identifying, measuring, capturing, tracking and analyzing the potential of the human resources of a company and communicating the resultant information to the stakeholders of the company.

It was a method by which a cost was assigned to every employee when recruited, along with the value that employee generated during the tenure he/she worked for the company. HRA reflected the potential of the human resources of an organization in monetary terms, in financial statements. The two main components of HRA were the investment related to employees and the value generated by them. Investment in human capital included all costs incurred in increasing and upgrading the employees' skill sets and knowledge of human resources. The output that an organization generated from human resources was regarded as the value of its human resources.

The costs incurred on the development of human resources were incurred with the intention of obtaining future benefits. Therefore, these costs were not to be treated as expenditure, but as investments, future revenues or assets. The expenditure incurred by an organization on recruiting, selecting, training and developing employees had to be capitalized and shown in the balance sheet as assets, as humans possess skills, knowledge and experience which could be turned into value for the organization.

However, it was argued by some critics that costs did not reflect the true value, and that true value could be known only from the difference between real performance and the cost incurred, associated with the human resources of the organization.

2. Replacement Cost Method: The cost incurred by an organization on replacing the earlier employees and strengthening the organization further had to reflect the human resource value of both the employees and the organization.

Critics argued that it is difficult to assess the replacement cost of the employees, as the value they generated over a period of time and their contribution to the organization were difficult to measure in relation to the cost incurred to employ them.

3. Opportunity Cost Method: According to this model, the potential monetary value to be generated by an employee was estimated by allocating the employee to an activity in which he/she best fitted. In other words, the opportunity cost of key employees in the organization was assessed in relation to their performance and in accordance with the organizational goals.

The investment managers used to bid for the employees, and the highest bid for an employee was considered his price, which was to be reflected in the balance sheet. The bid price was a measure of the employee's competence and experience, and the value that he would generate for the organization. Critics argued that competitive bidding involved assessing the future contribution of an employee to the organization's goals, which made more individuals disassociate themselves from the bidding process, thereby making it difficult for the organization to measure their value.

They further argued that the bid price placed on an employee may be based on the perception of the bidder, which may not give a correct estimation of the employee's true value. The value to be generated by an employee was relative, and hence this measurement could not be effective.

4. Standard Cost Method: According to this model, the costs of recruiting, selecting, training and developing a particular grade of employees were standardized. These costs were determined and evaluated over the years to get the total value of the human resources in an organization.

5. Goodwill Method:

This model was developed by Hermanson and was also called the Hermanson model. According to this model, the additional profit earned by an organization during a particular period of time was compared to the industry's average rate.

Q2. Explain in detail the HRA model adopted by Infosys. What benefits did the company reap after valuing its human resources?

Ans: HRA AT INFOSYS
The company used the Lev & Schwartz Model (Refer Exhibit I) and valued its human resources assets at Rs 1.86 billion. Infosys' HRA model was based on the present value of the employees' future earnings, with the following assumptions: an employee's salary package included all benefits, whether direct or otherwise, earned both in India and in a foreign nation; and the additional earnings on the basis of age and group were also taken into account. To calculate the value of its human assets in 1995-96, all the 1,172 employees of Infosys were divided into five groups, based on their average age. Each group's average compensation was calculated. Infosys also calculated the compensation of each employee at retirement by using an average rate of increment.

The increments were based on industry standards and on the employee's performance and productivity. Finally, the total compensation of each group was calculated. This value was discounted at the rate of 27.36 percent per annum, which was the cost of capital of Infosys, and the sum of the values of all the groups was calculated to arrive at the figure of Rs 1.86 billion. The formula used by Infosys as per the Lev and Schwartz model was:

H = Σ_e Σ_y I_e(y) / (1 + d)^y

where H = discounted present human capital value for all individuals in the company, I_e(y) = annual earnings of employee e for the year y, and d = discount rate specific to the cost of capital of the company.

The company could determine whether its human asset was appreciating over the years or not. This information was important for the company, as its success depended solely on the knowledge of its employees. In addition, the company could also use this information internally to compare the performance and productivity of employees in various departments. HRA also helped Infosys to decide the compensation of employees.
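A minimal sketch of the group-wise present-value calculation described above is given below. The five age groups, headcounts, average salaries, increment rate and retirement age are hypothetical placeholders; only the 27.36 percent discount rate and the total of 1,172 employees are taken from the case.

```python
# Sketch of the Lev & Schwartz-style valuation described above:
# H = sum over employees (here, age groups) of sum over remaining years y
#     of I_e(y) / (1 + d)**y, with d = cost of capital.
# Group sizes, salaries, increment rate and retirement age are hypothetical
# placeholders; only the 27.36% discount rate comes from the case.

DISCOUNT_RATE = 0.2736   # Infosys' cost of capital per the case
RETIREMENT_AGE = 60      # assumed
INCREMENT_RATE = 0.10    # assumed average annual increment

# (average age, headcount, average annual compensation) -- hypothetical
groups = [
    (25, 400, 300_000),
    (30, 350, 450_000),
    (35, 250, 650_000),
    (40, 120, 900_000),
    (45, 52, 1_200_000),
]

def group_value(avg_age, headcount, avg_comp):
    """Present value of the group's future earnings until retirement."""
    value = 0.0
    for y in range(1, RETIREMENT_AGE - avg_age + 1):
        earnings = avg_comp * (1 + INCREMENT_RATE) ** (y - 1)
        value += earnings / (1 + DISCOUNT_RATE) ** y
    return headcount * value

total = sum(group_value(*g) for g in groups)
print(f"Estimated human capital value: Rs {total:,.0f}")
```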

The company ensured that it compensated each employee according to his/her worth.

Q3. What are the possible disadvantages of the evaluation of human capital by organizations? Do you think it is ethical on the part of an organization to place a monetary value on its employees? Explain.

Ans: The disadvantages of the evaluation of human capital by organizations include the fact that many organizations do not project a true and fair view of their financial position when valuing human capital; it might also result in the underestimation of some efficient employees and the over-estimation of some others.

Also, different companies used different models of HRA, and comparing two companies that use two different models would be difficult. Companies could also misuse HRA to enhance their image. Putting a monetary value on employees has both pros and cons. From a societal and human perspective, it might be argued that adopting this approach amounts to treating humans as commodities and doubting an individual's abilities, knowledge, skills and experience. Furthermore, assigning a definite value to each individual may not be proper because the knowledge of each individual differs from that of another.

However, in a dynamic environment with changing trends, surviving in the global village is like balancing on a double-edged sword; employees, having realized this, are forever on competitive tenterhooks, and hence it is justified to value human assets, as doing so changed the perspective from which companies viewed their employees' utility to the organization. The financial strength of a company is determined by all the resources in the company, including human resources. Companies felt that HRA would gain popularity in India only when organizations moved away from the traditional management style, which gave less importance to people.

There were also several functional benefits:
1. It allowed managers to take decisions based on the availability of, and the necessity for, human resources.
2. It gave investors and other clients insights into the organization and its future potential.
3. Proper valuation of human resources helped organizations to eliminate the negative effects of redundancy and to channel the available skills, talents, knowledge and experience of their employees more efficiently.

Leadership in the Public Safety Environment college essay help los angeles

Today's leaders are faced with many obstacles that either make or break them in their roles as leaders. How leaders should act, how they should lead, and even how much experience they have as leaders are recurring questions and concerns. The public safety environment as a whole is one of those organizations that is faced with continued scrutiny concerning the performance of its leaders.

Oftentimes, people look to their local government and law enforcement agencies to be the leaders in deterring and stopping crime, yet at times the tables turn and crimes are committed by government and law enforcement officials themselves. For example, in September 2005, seven New Orleans police officers, known as the "Danziger 7," who were tasked with protecting and serving the people of New Orleans during this major time of crisis, found themselves indicted for killing and wounding innocent citizens trying to seek safety (Hampton, 2007).

This outrage created such a disruption in the city that the people no longer trusted the leaders who were supposed to protect them. This type of behavior is certainly the face of pseudo-transformational leadership. Pseudo-transformational leadership (i.e., the unethical facet of transformational leadership) is manifested by a particular combination of transformational leadership behaviors (i.e., low idealized influence and high inspirational motivation), and is differentiated from both transformational leadership (i.e., high idealized influence and high inspirational motivation) and laissez-faire (non)leadership (i.e., low idealized influence and low inspirational motivation) (Barling, Christie, & Turner, 2007). There could be no better way to describe this type of behavior, especially when it is put before the eyes of the people. The situation concerning the "Danziger 7" was a tragedy, but to add insult to injury, seven sworn officers tried to cover up their wrongdoing, and there were actual supervisors on the scene who supported it.

This is a clear recipe for disaster within a department that has seen its share of scandals and investigations. When an officer takes the law into his or her own hands and puts aside what he or she is sworn to uphold in order to save himself or herself, that officer represents the clear meaning of pseudo-transformational leadership. However, amid the scandals that have disgraced the New Orleans Police Department, the effort to restructure, train and retrain, and enforce and reinforce policies is the way to create a safe, working and trusted department.

On the other hand, for every negative form of leadership there is a positive side. Transforming leadership is seen to be more demanding, but its results are solid. Vinzant and Crothers (1998) point out that transformational leaders focus effort and make choices based on goals, values, and ideals that the leader determines the group or organization wants or ought to advance (Meese & Ortmeier, 2004, p. 54). This leadership style presents a clear picture of how any organization or group should be operating. The end result of any leader's work should be both quality and quantity.

When quality work is presented, the public as a whole develops a level of respect for those who are serving them. The demand for transforming leadership is constantly changing and evolving as times change. People look for transparency within their public safety organizations and expect those in leadership to understand what is going on and to be able to offer better solutions to their problems. At the end of the day, all anyone wants is fairness and justice. The people expect leaders to be leaders and to take care of them, as they are entrusted to do, and not to become suspects in the system.

Outline and Evaluate Bowlby's Theory of Attachment college essay help near me

Outline and evaluate Bowlby's theory of attachment (12 marks). Bowlby was an evolutionary psychologist who believed that attachment is a part of evolutionary behaviour. Evolutionary psychologists focus on an animal's instinctive and innate capabilities and the functions of its behaviour, and they believe this is useful for learning about human instinctive and biological behaviour. Attachment behaviour keeps a young animal or human safe. It is behaviour seen in all species of animal.

Many species of animal form rapid attachments to their mother almost immediately after birth, and young babies follow their mothers around as soon as they can physically walk, using their mother as a secure base for exploration. The critical period hypothesis states that if you fail to attach, or suffer a disrupted bond, between the ages of one and three years, then you will suffer long-term, irreversible cognitive, social and emotional problems. Evidence to support this includes privation studies, such as studies of children raised in orphanages.

This supports the critical period hypothesis, as the children had no attachment during the critical period and did suffer long-lasting and irreversible consequences. However, some privation studies have shown that even children who suffered privation during the critical period have recovered. This has led some psychologists to rename it the 'sensitive period'. Social releasers are instincts that babies are born with to attract their parents' attention. These include crying, sucking, clinging, gripping and imitating. They help in attachment because they release, or trigger, the parents' instinct to respond to the biological needs of the baby.

This has been supported by Klaus and Kennell, who stated that mothers who had prolonged skin-to-skin contact with their babies had a stronger attachment bond. The time enabled the parents to 'switch on' their maternal instincts. However, this has been criticized because maternal instinct may be present at all times, not just after having a baby, so most women's hormones make them react to social releasers even when the baby is not their own. The monotropy hypothesis states that you attach to one person initially to ensure your survival, and then go on to develop a hierarchy of attachments with other family members.

This has been supported by Tronick et al., who found that even in tribes where it was common for infants to be breastfed by other women, the child still had a preference for its mother. However, it has been criticized by Parke et al., who said that babies need multiple attachments from birth to fulfil different roles. For example, the father is very important for rough play and excitement. A schema is a pocket of information stored in the brain, and we have a schema for everything. The internal working model is our schema for relationships.

Our schema for relationships is full of what we know about relationships. It is heavily influenced by our first relationship with our parents, and this is what we come to expect from a relationship. Evidence for the schema comes from Hazan and Shaver's 'Love Quiz', where they found that your attachment type with your mother reflected your expectations in later 'love' partnerships. This has been criticized because our schema for relationships is also influenced by later relationships with people other than our mother.

Bowlby's theory explains how and why we attach, and it has a lot of evidence and further research to support it, but it is very difficult to test the claim that we have developed instincts from evolution. He has explored and explained the consequences of what happens if we don't attach as well as what happens when we do, but he focuses only on the mother-and-child relationship. The theory also seems to put too much emphasis on instinct and does not explore experience and how we learn about attachment. However, many developmental psychologists have used at least some of his ideas.

Relationship Between Rising US Unemployment grad school essay help

Is there a relationship between rising US unemployment and the rise of the Canadian dollar? Canada's financial stability depends on the health of America's economy, as international trade accounts for 45% of Canada's Gross Domestic Product (GDP) and 79% of exports go to the United States. Canadian and American unemployment rates are positively correlated for that reason, as exemplified in early 2009. Canada's unemployment rate rose quickly as the United States' rate gradually increased to about 10% (refer to graphs 1 and 2). During this time, Canada's growing trade surplus became a deficit in only a few months (refer to graph 3).

From this data, one can determine that Canada's exports decreased rapidly due to rising economic turmoil in the United States. The effects on the dollar seemed to correlate positively: Canada's dollar decreased in value compared to the US dollar. However, concluding that this change was due to the U.S. unemployment rate is inaccurate. The ever-changing exchange rate of the dollar is determined by many factors. As of 2011, Canadian and American unemployment rates remain high at approximately 7.3% and 9%, respectively. In addition, a trade deficit continues to exist in Canada.

Nevertheless, the Canadian dollar is gaining strength over the American dollar, which contrasts with the weakened exchange rate in 2009, when the same conditions existed (refer to graph 4). Therefore, rising U.S. unemployment can have a positive or negative effect on the Canadian dollar. I will examine how the increasing U.S. unemployment rate can potentially strengthen or weaken the loonie. A rise in U.S. unemployment can indicate a relationship with the rise of the Canadian dollar. The increase in U.S. unemployment is a result of America's 2008 recession.

When unemployment increases in America, the U.S. Federal Reserve needs to decrease interest rates in order to stimulate the economy (as exemplified in graphs 2 and 5). If interest rates are high, consumers will save their money and try to cut back on spending. If interest rates are low, consumers are encouraged to borrow money from the banks and spend it. Although lower interest rates can improve domestic spending, they discourage foreign investors because the return on their investments decreases. As of September 2011, the US interest rate is close to 0% and Canada's rate is at 1%, as determined by the Bank of Canada.

Low interest rates indicate that the US is in poor financial health. More countries will want to invest in Canada if the US is in a risky financial situation and, additionally, provides low returns. The value of the Canadian dollar rises when demand for it increases. Due to America's increasing government debt and declining interest rates, Canada's economy is comparatively more stable. Canada's credit rating is currently higher than America's. A credit rating indicates the financial health of a country and how large the risk is for lenders to invest.

Standard & Poor's, a credit rating agency, recently downgraded America's rating to AA+ from the highest score of AAA. The downgrade was a result of the U.S.'s increasing debt from the recession. Canada's credit rating is still the highest, at triple-A, because of the country's stability. Canada is more attractive to invest in, as there are solid returns on interest rates and low risk for lenders. Increased demand for Canadian assets can result in a higher exchange rate. Investors want to put their money into a country they believe is a safe haven.

Although Canada can become more appealing to foreign investors, a rise in U.S. unemployment can also be a factor in weakening the Canadian dollar. The U.S. dollar decreases in value when unemployment is high because the government receives less tax revenue. If the U.S. dollar falls, Americans have less purchasing power and Canadian commodities become more expensive. As a result, U.S. consumers will be discouraged from spending on and/or importing from Canada. Additionally, a weaker U.S. dollar will decrease domestic expenditure in Canada because Canadians can purchase more for their dollar in the United States.

If consumer spending and exports decrease, Canada's economy can decline simultaneously with the American economy. As mentioned above, international trade accounts for 45% of Canada's GDP. On April 1, 2011, the loonie hit a three-year high after U.S. employment increased and the jobless rate decreased. The rise in the dollar following a decrease in U.S. unemployment exemplifies America and Canada's trading partnership and their coinciding economies. The strength of the U.S. economy is important for Canada's trade, economy and exchange rate. Increasing U.S. unemployment can positively or negatively influence the value of the loonie.

Canada's dollar can increase due to demand from buyers looking to invest in a stable economy with low risk. On the other hand, Canada's GDP is dependent on U.S. trade. If fewer Americans purchase goods from Canada due to increased unemployment, the Canadian dollar will fall. Therefore, one cannot conclude that unemployment rates alone influence the rise of the Canadian dollar. There are many other factors that influence the exchange rate, such as commodity prices, investor confidence, currency overvaluation and inflation rates.

Cultural and Ethnic Identities of These Individuals college essay help service

Rumspringa, defined as "running around," is a time when Amish youth, at the age of 16, decide whether to remain in or leave their community and faith. During this time teens are allowed to enter and lead a life in the "English" world and participate in partying, drinking, illegal drugs and pre-marital sex. During rumspringa teens are exposed to a myriad of things that they normally would not encounter in their regular Amish life. This stage of their life strongly affects the cultural identity of these young adults.

It causes these kids either to want to go back home and join the Amish church or to run from it as far away as possible. They are exposed to all the things that, in their day-to-day lives, they are usually told are sins or the devil's way of living. One of the main things you see some, if not all, of them doing is smoking cigarettes, even the girls, while they are still dressed in their Amish clothes. "Cultural group membership is acquired through the guidance of primary caretakers and peer association during our formative years" (Toomey and Chung, p. 3). This time is part of a bigger problem for the Amish sect, as it brings about a mindset of total independence on the part of their youth, something many, especially boys, have difficulty handling appropriately at this young age. In addition, it is viewed by some as "a casual look the other way time" on the part of the Amish parents and other adults. It can be acknowledged that some Amish parents do relax their standards somewhat when their offspring turn 16, and some permit exploration to an extent.

However, it is hard to believe any Amish parent would ever tell their 16-year-old to go out and experience the "world," as one is led to believe by this documentary. Without the guidance of "primary caretakers," the Amish parents, to guide these youngsters toward the right cultural identity, a loss of cultural identity begins, and the future of the Amish culture is put at risk as more and more teens choose not to go back, get baptized and join the Amish church.

After going through rumspringa, many of the teens seem to lose their emotional attachment to Amish life and affiliation. They seem to enjoy a life of vehicular transportation, electricity, alcohol and cable TV, all of which are not part of the Amish culture. It can be argued, by looking at "value content" and "cultural identity salience," that these teens lose cultural identity during this time; this is the reasoning behind many of the teens who go through rumspringa deciding to stay in the "English" world for the time being, or not to join the Amish church at all.

They also go from being part of a collectivistic culture to an individualistic one, which they seem to enjoy more. The ethnic identity of these individuals tends to stick with them whichever path they choose to take. Gerald speaks about how, even if he does not go back to be part of the Amish church, he will always think about whether he will go to heaven or hell, an idea drilled deep into his mind growing up Amish. Faron also still has his Amish ethnic identity: when he speaks to his parents, his accent changes in the way he talks to them.

Though they might have Amish as their ethnic identity, these individuals' acts conflict with the norms and behaviours of this particular ethnic group during the time of rumspringa. Looking at the different ways the teens going through rumspringa act, it can be concluded that they fall under marginal identity due to their weak ethnic and cultural identity, where they are no longer connected with their ethnic group in Amish society, leaving them in a state of ambiguity and alienation.

Racism in the Media in the United States get essay help

In this research paper you will identify a social problem/issue related to contemporary racial and ethnic inequality in the United States and research all that you can about that problem. You will explore in your paper: What is the problem/issue? How is the problem defined from a sociological perspective (meaning, what are the social and cultural causes of the problem)? How do you know it’s a problem? What is the evidence? What racialized/ethnic/underrepresented group(s) is/are impacted?

How are they impacted? How does the problem relate to Beverly Daniel Tatum’s definition of racism, particularly in relation to institutional policies and practices? What institutions are involved? Use the “Four themes of institutional racism” to help you evaluate how institutions perpetuate the problem. Be sure to discuss the historical processes within the relevant institutions that have led up to the contemporary conditions. How are cultural messages relevant to the problem?

How is privilege relevant to the problem? Be sure to integrate sociological concepts from the course and define and cite terms you use. Write in essay form – e. g. , don’t just answer the above questions – arrange your essay such that the above questions are answered, but don’t just list the answers out 1, 2, etc. Minimum references required: Two books and two scholarly journals. Be sure to ask me if you have questions about what “scholarly” means.

You may use additional sources, but it would be best to ask me about the quality and reliability of those sources. Cite references and include a "References" section at the end of the paper (not included in the page-number requirement; see ASA format). Format: 4-6 full pages, double-spaced, 1-inch margins, 10-12 point Times New Roman or Arial font (or something comparable). Your grade (out of 200 points) will be based on how well you:
- define and explain the problem from a sociological perspective (and show how the group(s) are impacted).

The Man's Yard You Should Not Hit Your Ball my essay help uk

From the minute I read the title of today’s poem, I knew I was going to enjoy it. I chose this poem because the title reminded me of a very familiar childhood movie, The Sandlot; because of this I thought I could interpret it the best out of all of them. This poem is unique in that the title is actually the first two lines of the poem. Right from the start the title says a lot. The title is very direct in the way it leads into the poem. Readers don’t have to guess what this poem is going to be about.

Also, almost everyone who has lived in some kind of neighborhood can relate to it. Anyone who has had a mean neighbor can in some way share this type of experience. Or maybe it's just a memory of that one house everyone knew you could not go near or into; parents are always telling their children to stay away. The image that almost immediately popped into my head was The Sandlot. The next image that came to mind from the poem was when Lux says "and mowed his lawn, his dry quarter-acre, the machine slicing a wisp from each blade's tip." The lines use a lot of imagery to create a scene for the reader. The line depicts him mowing the lawn every day. It suggests that he lives a very meticulous, tedious and planned lifestyle. He mows every day at six o'clock, no matter if it's spring, summer, or fall. This leads me to believe that this man is a man of habit. Lux also says if he could, he would mow the snow. The poem shifts somewhat and depicts him as being somewhat uptight and miserable, living this same kind of boring or depressed lifestyle day after day. The poem then goes on to describe his wife.

Lux claims she is like "shoebox paper." She is brittle and very easy to break. He states that she is fragile, that she is like a "broken apron." The line suggests that in some way he "broke" her or seriously hurt her, leading into the next line, "As if into her head he drove a wedge of shale." It soon after mentions his daughter: "Years later, his daughter goes to jail." This line adds to the overall mood of the poem. The reader can feel some kind of empathy for the character in this poem because of his life and relationships.

Lux then talks about how the pasture between his house and the old man's house was a "Field of fly balls, the best part of childhood and baseball," where the main character would go to hit, and if a ball crossed the line into his yard it became what Lux calls "coleslaw": "His mower ate it up, Happy to cut something no matter what the manual said about foreign objects, stones, or sticks." The main character gladly ran over and destroyed the baseballs with his mower whenever they trespassed onto his lawn.

Consumer Spending in Asia narrative essay help

Asia is the world's largest and most populous continent. Interestingly, the countries which fall under Asia vary in size, environment, historical ties and governance systems, and thus the wealth of these countries differs quite drastically. For example, in terms of Gross Domestic Product, GDP ("the market value of all the goods and services produced by labour and property located in a country" (About.com 2009)), Japan has the largest economy on the continent.

In fact, measured in terms of GDP, Japan has the second-largest economy in the world (Wikipedia 2009). Yet this is a far cry from other Asian countries such as Pakistan and Bangladesh, where the annual turnover of some large multinationals exceeds the national GDP. Unfortunately, despite the fact that Asia accounts for roughly 60% of the world's population (Wikipedia 2009), it has been overshadowed (in economic terms) by the sheer might and power of the Western economies, namely America.

However, in a bizarre twist of fate sparked by the now infamous credit crunch, which has had a devastating effect on the once robust economies of the West, many are now asking the question: can Asians replace Americans as a driver of global growth? (Economist, June 2009). These Asian countries or economies are often referred to as 'Emerging Markets', a term that is widely used and loosely defined. The term 'Emerging Markets' was first coined by Antoine W. Van Agtmael of the IFC (International Finance Corporation) of the World Bank in 1981 (Heakal 2009).

It is used to describe fast-growing economies which have embarked on economic development and reform programs (Heakal 2009). Thus they are considered to be transitional economies, as they are moving from a closed economy to an open economy while, importantly, building accountability within the system (Heakal 2009). China and India are two prominent examples of 'Emerging Market' countries. Gone are the days when these economies were ignored. The growing economic strength of these countries may, one could go so far as to say, be seen as a threat to current international business.

China and India use their growing wealth to compete actively with the West (Ashburton, 2006). For example, the takeover of Corus Steel by the Indian company Tata made it the largest Indian takeover of a foreign company and created the world's fifth-largest steel firm (BBC News, 2006). Another example is the Indian company Taj Hotels positioning itself as a global player by succeeding Four Seasons Hotels in operating a New York City landmark. As many multinationals face domestic market saturation (Fenwick, 2001), they could undoubtedly benefit from accessing these huge markets.

The purchasing power of China is greater than that of any other country in Asia, and the second largest in the world (Wikipedia 2009). However, the economies of these 'Emerging Market' countries differ considerably from the West in terms of culture, and it has been argued that, unlike in countries in the West, individuals have a tendency to save rather than spend, and thus these countries have large current account surpluses. However, the statistics tell a rather different story. 'In China, India and Indonesia spending has increased by annual rates of more than 5% during the global downturn.

China's retail sales have soared by 15% over the past year' (Economist 2009). These are phenomenal numbers. They include government spending and thus overstate the picture; however, according to official household surveys, the increase is in fact more in the region of 9%. This is highly impressive in comparison with the downturn in the West. Sales of cars have increased by a staggering 47%, clothes by 22% and electronics by 12%. Ironically, while car sales were up in Asia, American taxpayers had to bail out Detroit's once-mighty carmakers.

However, it is not good news across all of Asia: spending has suffered as a result of increased unemployment and lower wages in countries such as Hong Kong, South Korea and Singapore, where real consumer spending fell by around 4-5%. Yet there are positive signs, in countries such as Taiwan, where retail spending rose in May for the third consecutive month, that spending is beginning to increase. The fact remains that, relative to American consumer spending, Asian consumer spending has soared (Economist 2009).

However, despite the strong growth and purchasing power of China, the fact remains that in dollar terms China's population spends only about one-sixth of what America's does. This explains in part why the Chinese government has taken such bold steps to boost consumption. For example, it has made it easier to borrow, as well as issuing a number of subsidies for villagers, enabling them to buy vehicles and electronic goods such as TVs, computers and mobile phones. This is a government that wants its people to dig deep into their wallets and spend. Furthermore, there are sufficient grounds for a positive outlook for the future.

As incomes rise, this will no doubt have a positive effect on future sales. At the moment, only 30% of rural households own a refrigerator, a far lower share than among urban households. If the hopes of the governments in Asia are to be met, and consumer spending is to continue to soar, the answer lies in financing. The developed countries have a household-debt-to-GDP ratio of around 100%; this is significantly higher than that of most Asian economies, whose household debt is less than 50% of GDP. In China and India in particular, it is even lower, at about 15%.

Interestingly, the one exception to this is South Korea, where households have as much debt relative to their income as Americans do. It seems the Chinese government has plans in progress to tackle this: in May this year China's central bank began planning legislation which will allow foreign institutions to set up consumer-finance firms, which will provide loans for consumer-goods purchases. However, perhaps the biggest question is whether these governments will allow their exchange rates to rise, to allow the balance of growth to shift from exports to domestic spending.

A rise in the exchange rate would increase consumers' real purchasing power and, arguably more importantly, give companies a reason to start producing goods for the domestic market. Unfortunately, these governments have been reluctant to allow their currencies to rise too fast. Asian spending is without a doubt an important part of global growth. Surprisingly, even prior to the financial crisis which has hit the West, emerging Asia's consumer spending contributed slightly more (in absolute dollar terms) to the growth in global demand than did America's (Economist 2009).

For a long time, globalisation and free markets have been blamed for widening the gap between the rich and the poor. It has been argued that markets create the 'progressive exclusion of the poor' (Patnaik 2003, p. 62). Indeed, much research has reached the conclusion that capitalism has been 'dominated by uneven development, in which divergence is the rule and convergence the exception' (Weeks 2001, p. 28). Perhaps, and it is a big stretch at the moment, the latest developments indicate a shift towards the once overlooked.

However, this pessimist can't help but feel that these Emerging Market economies are far from truly enjoying the fruits of their labour and, perhaps even worse, that they have only been given a taste of something that will remain out of reach until their governments wake up to the fact that, rather than subsidising Western consumers through undervalued currencies, they need to revalue those currencies.

Business Decision Models Assignment college essay help near me

With this new requirement that the number of shoe stores equal the number of jewelry stores, a number of results will change in our solution. The additional constraint you need to add is that shoe stores = jewelry stores. The new optimal solution is the following: two shoe stores, two jewelry stores, three department stores, two bookstores and two clothing stores. The total space used equals 9,900 square feet and the total profit is $1,390,000. Adding this constraint therefore decreases the total profit by $20,000.

c) Let J = the number of jewelry stores in the mall, where J is required to be a whole number between 1 and 3. Let S = the number of shoe stores in the mall, where S is required to be a whole number between 1 and 3. Let D = the number of department stores in the mall, where D is required to be a whole number between 1 and 3. Let B = the number of bookstores in the mall, where B is required to be a whole number between 0 and 3. Let C = the number of clothing stores in the mall, where C is required to be a whole number between 1 and 3.

It would be difficult to formulate the profit maximization with this model because the profit for each store type depends on the number of stores of that type. In this model it is difficult to express different profit amounts for different values of the decision variables. If J = 1, the profit is different from J = 2, and not simply doubled: one jewelry store earns 90, but two earn 160, not 180. Because profit is not a linear function of the count variables, the objective cannot be written with a single profit coefficient per store type, which makes it difficult to formulate the solution using this model.

2a)

Let J1 = 1 if one jewelry store is in the mall, = 0 otherwise
Let J2 = 1 if two jewelry stores are in the mall, = 0 otherwise
Let J3 = 1 if three jewelry stores are in the mall, = 0 otherwise
Let S1 = 1 if one shoe store is in the mall, = 0 otherwise
Let S2 = 1 if two shoe stores are in the mall, = 0 otherwise
Let S3 = 1 if three shoe stores are in the mall, = 0 otherwise
Let D1 = 1 if one department store is in the mall, = 0 otherwise
Let D2 = 1 if two department stores are in the mall, = 0 otherwise
Let D3 = 1 if three department stores are in the mall, = 0 otherwise
Let B0 = 1 if zero bookstores are in the mall, = 0 otherwise
Let B1 = 1 if one bookstore is in the mall, = 0 otherwise
Let B2 = 1 if two bookstores are in the mall, = 0 otherwise
Let B3 = 1 if three bookstores are in the mall, = 0 otherwise
Let C1 = 1 if one clothing store is in the mall, = 0 otherwise
Let C2 = 1 if two clothing stores are in the mall, = 0 otherwise
Let C3 = 1 if three clothing stores are in the mall, = 0 otherwise
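With one binary indicator per store type and count, the model picks exactly one count for each type (e.g. J1 + J2 + J3 = 1) and attaches the corresponding nonlinear profit and floor space to that choice. Because the case's data (profit per count, square footage per count, total space available) are not reproduced here, the sketch below uses hypothetical placeholder figures; apart from the one-jewelry-store profit of 90 and two-store profit of 160 mentioned above, none of the numbers come from the assignment. Since every store type has at most four allowed counts, the same selection can be checked by brute-force enumeration rather than a solver:

```python
from itertools import product

# Hypothetical data (placeholders, NOT the case's actual figures), except
# that one jewelry store earning 90 and two earning 160 comes from the text.
# profit[t][k] = total profit (in $000s) if k stores of type t are built;
# profit is deliberately NOT proportional to k, which is why the binary
# indicators J1..C3 (one per allowed count) are needed in the LP formulation.
profit = {
    "jewelry":    {1: 90,  2: 160, 3: 210},
    "shoe":       {1: 100, 2: 180, 3: 240},
    "department": {1: 250, 2: 480, 3: 690},
    "bookstore":  {0: 0,   1: 70,  2: 130, 3: 180},
    "clothing":   {1: 120, 2: 220, 3: 300},
}
space = {  # square feet used by k stores of each type (also hypothetical)
    "jewelry":    {1: 600,  2: 1100, 3: 1600},
    "shoe":       {1: 700,  2: 1300, 3: 1900},
    "department": {1: 1800, 2: 3500, 3: 5200},
    "bookstore":  {0: 0,    1: 500,  2: 950,  3: 1400},
    "clothing":   {1: 800,  2: 1500, 3: 2200},
}
TOTAL_SPACE = 10_000             # assumed available floor space
REQUIRE_SHOE_EQ_JEWELRY = False  # set True for the shoe-stores-equal-jewelry-stores constraint

best = None
types = list(profit)
# Choosing one allowed count per type is exactly what constraints like
# J1 + J2 + J3 = 1 enforce; here we simply enumerate all such choices.
for counts in product(*(sorted(profit[t]) for t in types)):
    plan = dict(zip(types, counts))
    if REQUIRE_SHOE_EQ_JEWELRY and plan["shoe"] != plan["jewelry"]:
        continue
    used = sum(space[t][k] for t, k in plan.items())
    if used > TOTAL_SPACE:
        continue
    earned = sum(profit[t][k] for t, k in plan.items())
    if best is None or earned > best[0]:
        best = (earned, used, plan)

print("Best profit ($000s):", best[0], "| space used:", best[1])
print("Store counts:", best[2])
```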
