Tuesday, July 11, 2006

AI Tech

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.
-- Alan Turing, 1950

Artificial intelligence (abbreviated AI, also sometimes called synthetic intelligence) is defined as intelligence exhibited by an artificial entity. Such a system is generally assumed to be a computer.

AI forms a vital branch of computer science, dealing with intelligent behavior, learning and adaptation in machines. Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, speech, and facial recognition. As such, it has become an engineering discipline, focused on providing solutions to real-life problems. AI systems are now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games like computer chess and other video games.

From: wikipedia.org/wiki/Artificial_intelligence



Copyright 1996 by NEC Research Institute and the Computing Research Association. Contributions to this document by Tom Dean and Jon Doyle are appreciated.

Back to Computing Research: Driving Information Technology and the Information Industry Forward (http://cra.org/research.impact)

Artificial Intelligence:
Realizing the Ultimate Promises of Computing

David L. Waltz
Vice President, Computer Science Research
NEC Research Institute
Bringing Common Sense, Expert Knowledge, and Superhuman Reasoning to Computers

Artificial Intelligence (AI) is the key technology in many of today's novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you're having problems and offer appropriate advice. These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades.

Although there are some fairly pure applications of AI -- such as industrial robots, or the Intellipath™ pathology diagnosis system recently approved by the American Medical Association and deployed in hundreds of hospitals worldwide -- for the most part, AI does not produce stand-alone systems, but instead adds knowledge and reasoning to existing applications, databases, and environments, to make them friendlier, smarter, and more sensitive to user behavior and changes in their environments. The AI portion of an application (e.g., a logical inference or learning module) is generally a large system, dependent on a substantial infrastructure. Industrial R&D, with its relatively short time-horizons, could not have justified work of the type and scale that has been required to build the foundation for the civilian and military successes that AI enjoys today. And beyond the myriad of currently deployed applications, ongoing efforts that draw upon these decades of federally-sponsored fundamental research point towards even more impressive future capabilities:

  • Autonomous vehicles: A DARPA-funded onboard computer system from Carnegie Mellon University drove a van all but 52 of the 2849 miles from Washington, DC to San Diego, averaging 63 miles per hour day and night, rain or shine;

  • Computer chess: Deep Blue, a chess computer built by IBM researchers, defeated world champion Garry Kasparov in a landmark performance;

  • Mathematical theorem proving: A computer system at Argonne National Laboratory proved a long-standing mathematical conjecture about algebra using a method that would be considered creative if done by humans;

  • Scientific classification: A NASA system learned to classify very faint signals as either stars or galaxies with superhuman accuracy, by studying examples classified by experts;

  • Advanced user interfaces: PEGASUS is a spoken language interface connected to the American Airlines EAASY SABRE reservation system, which allows subscribers to obtain flight information and make flight reservations via a large, on-line, dynamic database, accessed through their personal computer over the telephone.

    In a 1977 article, the late AI pioneer Allen Newell foresaw a time when the entire man-made world would be permeated by systems that cushioned us from dangers and increased our abilities: smart vehicles, roads, bridges, homes, offices, appliances, even clothes. Systems built around AI components will increasingly monitor financial transactions, predict physical phenomena and economic trends, control regional transportation systems, and plan military and industrial operations. Basic research on common sense reasoning, representing knowledge, perception, learning, and planning is advancing rapidly, and will lead to smarter versions of current applications and to entirely new applications. As computers become ever cheaper, smaller, and more powerful, AI capabilities will spread into nearly all industrial, governmental, and consumer applications.

    Moreover, AI has a long history of producing valuable spin-off technologies. AI researchers tend to look very far ahead, crafting powerful tools to help achieve the daunting tasks of building intelligent systems. Laboratories whose focus was AI first conceived and demonstrated such well-known technologies as the mouse, time-sharing, high-level symbolic programming languages (Lisp, Prolog, Scheme), computer graphics, the graphical user interface (GUI), computer games, the laser printer, object-oriented programming, the personal computer, email, hypertext, symbolic mathematics systems (Macsyma, Mathematica, Maple, Derive), and, most recently, the software agents which are now popular on the World Wide Web. There is every reason to believe that AI will continue to produce such spin-off technologies.

Intellectually, AI depends on a broad intercourse with computing disciplines and with fields outside computer science, including logic, psychology, linguistics, philosophy, neuroscience, mechanical engineering, statistics, economics, and control theory, among others. This breadth has been necessitated by the grandness of the dual challenges facing AI: creating mechanical intelligence and understanding the information basis of its human counterpart. AI problems are extremely difficult, far more difficult than was imagined when the field was founded. However, as much as AI has borrowed from many fields, it has returned the favor: through its interdisciplinary relationships, AI functions as a channel of ideas between computing and other fields, ideas that have profoundly changed those fields. For example, basic notions of computation such as memory and computational complexity play a critical role in cognitive psychology, and AI theories of knowledge representation and search have reshaped portions of philosophy, linguistics, mechanical engineering, and control theory.

    Historical Perspective

    Early work in AI focused on using cognitive and biological models to simulate and explain human information processing skills, on "logical" systems that perform common-sense and expert reasoning, and on robots that perceive and interact with their environment. This early work was spurred by visionary funding from the Defense Advanced Research Projects Agency (DARPA) and Office of Naval Research (ONR), which began on a large scale in the early 1960's and continues to this day. Basic AI research support from DARPA and ONR -- as well as support from NSF, NIH, AFOSR, NASA, and the U.S. Army beginning in the 1970's -- led to theoretical advances and to practical technologies for solving military, scientific, medical, and industrial information processing problems.

    By the early 1980's an "expert systems" industry had emerged, and Japan and Europe dramatically increased their funding of AI research. In some cases, early expert systems success led to inflated claims and unrealistic expectations: while the technology produced many highly effective systems, it proved very difficult to identify and encode the necessary expertise. The field did not grow as rapidly as investors had been led to expect, and this translated into some temporary disillusionment. AI researchers responded by developing new technologies, including streamlined methods for eliciting expert knowledge, automatic methods for learning and refining knowledge, and common sense knowledge to cover the gaps in expert information. These technologies have given rise to a new generation of expert systems that are easier to develop, maintain, and adapt to changing needs.

    Today developers can build systems that meet the advanced information processing needs of government and industry by choosing from a broad palette of mature technologies. Sophisticated methods for reasoning about uncertainty and for coping with incomplete knowledge have led to more robust diagnostic and planning systems. Hybrid technologies that combine symbolic representations of knowledge with more quantitative representations inspired by biological information processing systems have resulted in more flexible, human-like behavior. AI ideas also have been adopted by other computer scientists -- for example, "data mining," which combines ideas from databases, AI learning, and statistics to yield systems that find interesting patterns in large databases, given only very broad guidelines.
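
To make the "find interesting patterns given only very broad guidelines" idea concrete, here is a minimal, hedged sketch of one common data-mining step: counting which pairs of items co-occur frequently in transaction records. The transactions, items, and support threshold below are invented for illustration.

```python
from itertools import combinations
from collections import Counter

# Toy transaction database; each row is the set of items in one purchase.
transactions = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
    {"eggs", "butter"},
]

def frequent_pairs(rows, min_support=0.4):
    """Return item pairs that appear together in at least min_support of rows."""
    counts = Counter()
    for row in rows:
        for pair in combinations(sorted(row), 2):
            counts[pair] += 1
    return [(pair, n / len(rows)) for pair, n in counts.items()
            if n / len(rows) >= min_support]

print(frequent_pairs(transactions))
# ('bread', 'milk') co-occurs in 3 of 5 transactions (support 0.6), for example.
```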

    Case Studies

    The following four case studies highlight application areas where AI technology is having a strong impact on industry and everyday life.

    Authorizing Financial Transactions

Credit card providers, telephone companies, mortgage lenders, banks, and the U.S. Government employ AI systems to detect fraud and expedite financial transactions, with daily transaction volumes in the billions. These systems first use learning algorithms to construct profiles of customer usage patterns, and then use the resulting profiles to detect unusual patterns and take the appropriate action (e.g., disable the credit card). Such automated oversight of financial transactions is an important component in achieving a viable basis for electronic commerce.
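
As a rough illustration of the profile-then-flag approach described above, here is a minimal sketch in Python. The transaction fields, feature choices, scoring weights, and threshold are assumptions made for illustration; they are not details of any deployed fraud-detection system.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    amount: float        # purchase amount in dollars
    hour: int            # hour of day (0-23)
    merchant_category: str

def build_profile(history):
    """Learn a simple usage profile from a customer's past transactions."""
    amounts = [t.amount for t in history]
    return {
        "mean_amount": mean(amounts),
        "std_amount": stdev(amounts) if len(amounts) > 1 else 1.0,
        "usual_categories": {t.merchant_category for t in history},
        "usual_hours": {t.hour for t in history},
    }

def anomaly_score(profile, t):
    """Score how far a new transaction deviates from the learned profile."""
    score = abs(t.amount - profile["mean_amount"]) / profile["std_amount"]
    if t.merchant_category not in profile["usual_categories"]:
        score += 2.0   # unfamiliar merchant type
    if t.hour not in profile["usual_hours"]:
        score += 1.0   # unusual time of day
    return score

def review(profile, t, threshold=4.0):
    """Flag transactions whose deviation exceeds the threshold."""
    return "flag for review" if anomaly_score(profile, t) > threshold else "approve"
```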

    Configuring Hardware and Software

AI systems configure custom computer, communications, and manufacturing systems, guaranteeing the purchaser maximum efficiency and minimum setup time, while providing the seller with superhuman expertise in tracking the rapid technological evolution of system components and specifications. These systems detect order incompleteness and inconsistencies, employing large bodies of knowledge that describe the complex interactions of system components. Systems currently deployed process billions of dollars of orders annually; the estimated value of the market leader in this area is over a billion dollars.
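
The knowledge such configurators employ can be pictured as explicit rules about component interactions that an order is checked against. Below is a toy sketch of that idea; the components, rules, and error messages are invented for illustration and do not describe any particular product.

```python
# Hypothetical component knowledge base: each rule checks one interaction.
# Rules return an error message when violated, or None when satisfied.
RULES = [
    lambda order: ("a power supply is required"
                   if "power_supply" not in order else None),
    lambda order: ("selected power supply is too small for this CPU"
                   if order.get("cpu") == "fast_cpu"
                   and order.get("power_supply") == "300W" else None),
    lambda order: ("chosen memory is incompatible with this motherboard"
                   if order.get("memory") == "ddr"
                   and order.get("motherboard") == "legacy_board" else None),
]

def check_order(order):
    """Return all incompleteness/inconsistency problems found in an order."""
    return [msg for msg in (rule(order) for rule in RULES) if msg]

# Example: an order missing a power supply and mixing incompatible parts.
problems = check_order({"cpu": "fast_cpu", "memory": "ddr",
                        "motherboard": "legacy_board"})
print(problems)
```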

    Diagnosing and Treating Problems

    Systems that diagnose and treat problems -- whether illnesses in people or problems in hardware and software -- are now in widespread use. Diagnostic systems based on AI technology are being built into photocopiers, computer operating systems, and office automation tools to reduce service calls. Stand-alone units are being used to monitor and control operations in factories and office buildings. AI-based systems assist physicians in many kinds of medical diagnosis, in prescribing treatments, and in monitoring patient responses. Microsoft's Office Assistant, an integral part of every Office 97 application, provides users with customized help by means of decision-theoretic reasoning.
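
As a hedged sketch of what "decision-theoretic reasoning" can mean in this context (not Microsoft's actual model), the snippet below weighs beliefs about what the user is trying to do against the utility of each possible assistance action, and offers the action with the highest expected utility. All goals, actions, and numbers are invented.

```python
# Belief (posterior probability) over what the user is trying to do,
# e.g. inferred from recent actions. Values are illustrative only.
belief = {"writing_letter": 0.6, "making_table": 0.3, "idle": 0.1}

# Utility of each assistance action under each possible user goal.
# Offering irrelevant help has negative utility (it annoys the user).
utility = {
    "offer_letter_wizard": {"writing_letter": 8, "making_table": -2, "idle": -4},
    "offer_table_help":    {"writing_letter": -2, "making_table": 9, "idle": -4},
    "do_nothing":          {"writing_letter": 0, "making_table": 0, "idle": 0},
}

def best_action(belief, utility):
    """Return the action maximizing expected utility under the belief."""
    def expected(action):
        return sum(belief[g] * utility[action][g] for g in belief)
    return max(utility, key=expected)

print(best_action(belief, utility))   # offers the letter wizard in this example
```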

    Scheduling for Manufacturing

    The use of automatic scheduling for manufacturing operations is exploding as manufacturers realize that remaining competitive demands an ever more efficient use of resources. This AI technology -- supporting rapid rescheduling up and down the "supply chain" to respond to changing orders, changing markets, and unexpected events -- has shown itself superior to less adaptable systems based on older technology. This same technology has proven highly effective in other commercial tasks, including job shop scheduling, and assigning airport gates and railway crews. It also has proven highly effective in military settings -- DARPA reported that an AI-based logistics planning tool, DART, pressed into service for operations Desert Shield and Desert Storm, completely repaid its three decades of investment in AI research.

    The Future

    AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society.

    Even as AI technology becomes integrated into the fabric of everyday life, AI researchers remain focused on the grand challenges of automating intelligence. Work is progressing on developing systems that converse in natural language, that perceive and respond to their surroundings, and that encode and provide useful access to all of human knowledge and expertise. The pursuit of the ultimate goals of AI -- the design of intelligent artifacts; understanding of human intelligence; abstract understanding of intelligence (possibly superhuman) -- continues to have practical consequences in the form of new industries, enhanced functionality for existing systems, increased productivity in general, and improvements in the quality of life. But the ultimate promises of AI are still decades away, and the necessary advances in knowledge and technology will require a sustained fundamental research effort.

    The Author

    David Waltz is Vice President of the Computer Science Research Division of NEC Research Institute in Princeton, NJ, and Adjunct Professor of Computer Science at Brandeis University in Waltham, MA. From 1984 to 1993, he was Senior Scientist and Director of Advanced Information Systems at Thinking Machines Corporation in Cambridge, MA, where he headed a group that built a large-scale text retrieval system, a web search engine, and data mining systems. At the same time he was Professor of Computer Science at Brandeis. From 1974 to 1983 he was Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign.

    Waltz received the S.B., S.M., and Ph.D. degrees from MIT. His research, especially his work on constraint propagation and memory-based reasoning, has helped spawn active R&D fields.

Waltz is President-Elect of the American Association for Artificial Intelligence (AAAI) and was elected a Fellow of AAAI in 1990. He was President of the Association for Computing Machinery (ACM) Special Interest Group on Artificial Intelligence from 1977-79, and served as Executive Editor of Cognitive Science from 1983-86 and as AI Editor for Communications of the ACM from 1981-84. He is a senior member of IEEE.

    Advancing Healthcare through Technology

The Artificial Intelligence in Medicine (AIM) program is a research program of the Department of Medicine at Cedars-Sinai Medical Center. The AIM program seeks to develop software to allow computers to process and analyze three-dimensional images of the heart in much the same way an experienced human operator would. The program applies artificial intelligence techniques to the measurement of parameters critical to understanding the state and behavior of the human heart. This automated approach allows quantitatively accurate information to be obtained very quickly, without suffering from intra-observer or inter-observer variability.

    The software and algorithms developed by the AIM program are widely considered to represent the gold standard in nuclear cardiology. They can perform a totally automated, push-button sequence of actions relating to imaging processing and analysis.

    Through the AIM program, algorithms have been developed to:

    • Take raw digital data output by the gamma camera, identify where the heart is, reconstruct it into tomographic images and re-orient those images to make them perpendicular to the heart's axis - all without operator interaction.
• Take tomographic images of the heart, evaluate the signals from several hundred portions of the myocardium, compare the strength of those signals with the strength expected in a normal heart, and generate an exact quantitative measurement of the location, extent, and severity of perfusion abnormalities of the heart (a minimal sketch of this compare-to-normal idea appears after this list). This gives the physician objective information to help select or rule out more invasive treatment.
    • Analyze the dynamic functioning of the heart (i.e., the way it contracts and thickens during its cycle). A dynamic measurement of the heart cavity volume is performed from electrocardiographically gated three-dimensional nuclear cardiology images by automatically identifying the endocardial and epicardial surfaces and following their motion throughout the cardiac cycle.

The AIM programs are licensed and distributed by virtually every nuclear medicine camera manufacturer.
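
As a very rough, illustrative sketch of the compare-to-normal idea in the second bullet above (not the AIM program's actual algorithms), one can imagine scoring each myocardial region's signal against the values expected in a normal heart and summarizing the abnormal regions. All regions, normal values, and thresholds below are invented.

```python
def perfusion_report(patient_counts, normal_mean, normal_std, z_threshold=2.5):
    """Return per-region z-scores plus extent and severity of the defect."""
    report = []
    for region, counts in patient_counts.items():
        z = (counts - normal_mean[region]) / normal_std[region]
        report.append((region, z))
    abnormal = [(r, z) for r, z in report if z < -z_threshold]
    extent = len(abnormal) / len(report)                    # fraction of regions abnormal
    severity = sum(-z - z_threshold for _, z in abnormal)   # how far below threshold
    return report, extent, severity

# Invented example values: counts per region for a patient and a "normal" database.
patient = {"anterior": 72.0, "inferior": 95.0, "lateral": 98.0, "septal": 90.0}
norm_mu = {"anterior": 100.0, "inferior": 100.0, "lateral": 100.0, "septal": 100.0}
norm_sd = {"anterior": 8.0, "inferior": 8.0, "lateral": 8.0, "septal": 8.0}

_, extent, severity = perfusion_report(patient, norm_mu, norm_sd)
print(f"defect extent: {extent:.0%}, severity score: {severity:.1f}")
```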

I, Robot

I thought the concept of the storyline was good, as it could be seen as realistic. Given the ever-increasing advances in modern technology, one can indeed imagine this kind of future actually occurring.

I did not really see any flaws in this movie or in the actor's character, but the philosophical aspect of the movie raises the question: at what point does artificial intelligence cease to be artificial and true consciousness arise? Anyhow, I did like the A.I. in this movie and would definitely recommend it, especially if you like Will Smith movies or the Terminator series. I would, however, prefer there to be no sequels, since a sequel would probably be no more than a revamped version of the first one.

Funny Artificial Intelligence Robot chats with you in a human voice :P

      http://www.cooldictionary.com/splotchy.mpl

It's a really funny program, check it out.

      Wednesday, July 5, 2006

How does the brain work?


Last Wednesday our lecturer Dr. Egerton Senanayake taught us a subject called "effective communication", and he told us to collect information related to "knowledge acquisition". To find that information, I searched on Google and found some material on the topic. There was a nice article about the brain and its functions. I am publishing part of the article here, with better and clearer illustrations, for you.
-- s.j.jeyaram, 02/07/2006.
      Sources: Johns Hopkins Medical Institutions news release, May 23, 2006; and various web sites

We don't often think about how we learned to reach for things and control our gestures. But now, neuroscientists have used a robotic joystick to discover how our brain controls movements. By combining experimental and computational approaches, they found that our brains contain two sets of learning processes that operate at different speeds. The first learns and forgets quickly, while the second learns slowly but has a better 'memory.' These results could be used to rehabilitate "people who have lost motor abilities to brain injuries like strokes."

      Let's start by looking at the experiments.

      To test the need for time in mastering muscle control, the research team designed a simple and short task. Fourteen healthy human subjects were asked to hold onto a robot-controlled joystick and keep it from moving as the robot driver pushed repeatedly — in quick pulses — to one side. The joystick then pushed repeatedly in the opposite direction and again the subjects were asked to keep the joystick centered.

      The research team found that after all this pushing in different directions the subjects still were inclined to push the joystick in the first direction, even when the joystick was perfectly centered and not moving. Somehow the brain and muscles in the arm had "learned" this simple movement over the course of the experiment, which took only a few minutes, according to the researchers, showing that sleep is not required for learning such simple movements.

      Below is a picture showing how two learning processes (the slow state shown in blue and the fast one in green) "with distinct time courses contribute significantly to short-term motor skill acquisition" (Credit: PLoS Biology and the authors).

[Figure: the slow (blue) and fast (green) learning processes over time]

      But this illustration tells only one part of the story. Below is another one showing how these learning states evolve with the number of trials (on the horizontal axis). The red curve — which represents the multi-rate model — shows a jump in performance "caused by adaptation rebound in the error-clamp phase" (Credit: PLoS Biology and the authors).

[Figure: fast and slow learning states across trials, with the multi-rate model in red]

      But how did the researchers reach their conclusions?

By taking into account how many repetitions it took for the subjects to learn to push the joystick in the first direction to keep it centered, and how long it took for them to "forget" how hard to push, the model's predictions suggest that the brain learns muscle control using at least two different processes.

The computer programs were able to tease out that the brain picked up the control task quickly, but also forgot it quickly. At the same time, the brain was learning the same task more slowly, and that slower process was responsible for the subjects being able to "remember" the initial joystick-pushing movement.
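
The fast/slow picture described above can be reproduced with a very small simulation. Below is a minimal sketch of a two-state model in the spirit of the paper's multi-rate model: a fast process that learns and forgets quickly plus a slow process that learns slowly but retains well. The retention and learning-rate values are representative choices for illustration, not necessarily the paper's fitted parameters.

```python
# Minimal simulation of a two-state (fast + slow) adaptation model in the
# spirit of Smith, Ghazizadeh & Shadmehr (2006). Parameter values are
# illustrative, not the paper's fitted estimates.

def simulate(schedule, a_fast=0.59, b_fast=0.21, a_slow=0.992, b_slow=0.02):
    """schedule: list of (perturbation, error_clamped) pairs, one per trial."""
    x_fast, x_slow = 0.0, 0.0
    net = []
    for perturbation, clamped in schedule:
        # In error-clamp trials the arm is guided so the experienced error is zero.
        error = 0.0 if clamped else perturbation - (x_fast + x_slow)
        x_fast = a_fast * x_fast + b_fast * error   # learns fast, forgets fast
        x_slow = a_slow * x_slow + b_slow * error   # learns slowly, retains well
        net.append(x_fast + x_slow)
    return net

# Adapt to a push in one direction, briefly push the other way, then clamp
# the error at zero: net adaptation "rebounds" toward the first direction
# because the slow process still remembers it.
schedule = [(1.0, False)] * 150 + [(-1.0, False)] * 15 + [(0.0, True)] * 40
net = simulate(schedule)
print(round(net[149], 2), round(net[164], 2), round(net[-1], 2))
```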

      This research work was published by the PLoS Biology journal on May 23, 2006. Here is a link to a synopsis of the article named "A New Model of Short-Term Motor Adaptation" from which the first image above was extracted and which begins with the following introduction.

      Starting at around three months old, children can finally reach for the countless toys their parents have been dangling before them since birth. These attempts often involve a good deal of flailing about, as motor skills, like anything else, require cultivation. Motor control depends on executing the proper musculoskeletal force to reach the desired object.

      Prior learning facilitates motor control (all that flailing serves a purpose), which is aided by a fundamental feature of memory, called savings. When a novel response to a stimulus is learned in one set of trials, then "washed out" in an unlearning phase, subsequent relearning proceeds faster. Neuroscientists have been puzzled by savings and other features such as interference and rapid unlearning reported in adaptation studies because standard models of short-term motor adaptation couldn't explain them. But now Maurice Smith, Ali Ghazizadeh, and Reza Shadmehr have solved this puzzle.

If you're interested in this research, here is a link to the full paper, "Interacting Adaptive Processes with Different Timescales Underlie Short-Term Motor Learning", from which the second illustration in this post was extracted.

      And what will be the next step for these researchers? They want to discover which parts of the brain are responsible for slow-learning to "tailor therapy strategies to target slow-learning and increase recovery of muscle control after brain injuries."