Note
These notes were written after the fact from the slides; anything mentioned only in lecture is not included. They cover the entire Week 1 AI introduction slideshow.
Introduction to Artificial Intelligence
What is AI?
- Study of building intelligent entities: machines that act effectively & safely across diverse novel situations
- Fastest-growing field; wide applications; still much room for contributions
- AI spans:
- General: learning, reasoning, perception
- Specific: chess, theorem proving, self-driving cars, medical diagnosis
- Universal field: relevant to any intellectual task
AI vs Machine Learning
- Machine learning (ML): subfield of AI that improves performance from experience
- Confusion in public use — ML is only part of AI
- AI systems may use ML, but not all do
- Two views:
- Human-centered: AI as human-like intelligence
- Rationalist: AI as “doing the right thing” (rationality)
Russell & Norvig advocate: AI = acting rationally
Four Categories of AI
AI can be classified by two dimensions: human vs rational and thought vs behavior:
| | Human | Rational |
| --- | --- | --- |
| Think | Think like humans (cognitive modeling) | Think rationally (laws of thought) |
| Act | Act like humans (Turing test) | Act rationally (rational agent) |
Acting Humanly – The Turing Test
- Turing (1950): “Can machines think?”
- Proposed the Imitation Game:
- If a computer can fool a human interrogator into thinking it is human, it demonstrates intelligence
- Capabilities needed to pass:
- Natural language processing
- Knowledge representation
- Automated reasoning
- Machine learning
- Total Turing test adds:
- Computer vision
- Robotics
Example: ELIZA chatbot — worked via simple syntactic tricks, not deep understanding.
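To make "simple syntactic tricks" concrete, here is a minimal Python sketch of ELIZA-style pattern matching. The patterns and canned responses are invented for illustration; they are not the original ELIZA script, which also swapped pronouns and ranked keywords.

```python
import re

# A few keyword rules in the spirit of ELIZA (hypothetical examples,
# not the original script). Each rule just reflects the user's words back.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reflection of the input; no understanding is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no keyword matches

print(respond("I am worried about the exam"))
# -> "How long have you been worried about the exam?"
```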
Thinking Humanly – Cognitive Modeling
- To claim a program thinks like a human, we must know how humans think:
- Introspection (self-observation)
- Psychological experiments
- Brain imaging
- Requires scientific theories of internal brain activity
- Validated through prediction + testing or neurological evidence
- Overlaps with cognitive science and cognitive neuroscience
Example: ML + brain imaging used to approximate “mind reading”
Thinking Rationally – Laws of Thought
- Originates with Aristotle (“right thinking”)
- Formal logic → rules of reasoning
- Example: All men are mortal; Socrates is a man → Socrates is mortal (see the sketch after this list)
- Problems:
- Not all intelligent behavior involves logical deliberation
- Logic requires certainty about the world (rare)
- Probability theory helps with uncertainty
- Rational thought ≠ rational behavior — need theory of rational action
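A minimal sketch of the "laws of thought" idea, using the syllogism from the list above: the rule "all men are mortal" and the fact "Socrates is a man" are encoded as Python tuples, and the conclusion is derived by naive forward chaining. The representation is a toy, not a full logic system.

```python
# Toy knowledge base: facts are (predicate, subject) pairs, and each rule
# says "if X satisfies the premise predicate, conclude the conclusion predicate".
facts = {("man", "Socrates")}
rules = [("man", "mortal")]  # "All men are mortal"

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# -> {('man', 'Socrates'), ('mortal', 'Socrates')}, i.e. Socrates is mortal
```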
Acting Rationally – Rational Agents
- Rational behavior: doing the right thing to maximize goal achievement given available info
- Not always about “thinking” (reflexes can be rational if optimal)
- Rational agent:
- Acts autonomously
- Perceives environment
- Persists over time
- Adapts to change
- Pursues goals
- AI research goal = design rational agents
```mermaid
flowchart TD
    P[Percepts / History] --> F["f: P* → A"]
    F --> A[Actions]
```
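A minimal sketch of the agent function in the diagram, for an assumed thermostat-style toy environment (the percepts, thresholds, and action names are hypothetical): the function maps the percept history P* to an action in A.

```python
from typing import List

def agent_function(percept_history: List[float]) -> str:
    """Map the full percept history (temperature readings) to an action.

    This is the abstract f: P* -> A from the diagram; a table-driven or
    learned policy could replace this hand-written rule.
    """
    latest = percept_history[-1]
    if latest < 18.0:
        return "heat_on"
    if latest > 22.0:
        return "heat_off"
    return "do_nothing"

# One step of the agent loop: perceive, decide, then act in the environment.
history = [19.5, 18.2, 17.6]
print(agent_function(history))  # -> "heat_on"
```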
Limited Rationality
- Perfect rationality = impossible (computational limits)
- Instead: bounded rationality
- Act “well enough” under time/knowledge constraints (see the sketch after this list)
- Still, perfect rationality serves as a useful benchmark for theory
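One common way to act "well enough" under a time budget is an anytime loop: keep the best action found so far and stop when the deadline arrives. A hypothetical Python sketch, with placeholder candidate-generation and scoring functions:

```python
import random
import time

def propose_action():
    """Placeholder candidate generator (hypothetical)."""
    return random.random()

def score(action):
    """Placeholder evaluation of how good an action is (hypothetical)."""
    return -(action - 0.7) ** 2  # best candidates are near 0.7

def decide(time_budget_s: float):
    """Anytime decision: return the best action found before the deadline."""
    deadline = time.monotonic() + time_budget_s
    best_action, best_score = None, float("-inf")
    while time.monotonic() < deadline:
        candidate = propose_action()
        s = score(candidate)
        if s > best_score:
            best_action, best_score = candidate, s
    return best_action

print(decide(0.01))  # acts "well enough" given 10 ms, not perfectly
```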
Beneficial Machines & Value Alignment
- Standard model assumes objectives are fully specified
- Works in artificial tasks (chess, shortest path)
- But in real-world tasks, defining objectives is hard:
- Example: self-driving cars
- Goal: reach destination safely
- Perfect safety → never leave garage
- Must balance progress vs. risk (see the sketch after this list)
- Value alignment problem: objectives given to AI must match true human values
- Risks:
- Misaligned AI might pursue objectives dangerously (e.g., bribe opponent in chess if “winning” is sole goal)
- We want machines that are cautious, ask permission, defer to humans
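A minimal sketch of the objective-specification problem from the self-driving example above, with invented numbers: an objective that scores only progress prefers the riskier plan, while adding a risk penalty (a stand-in for the human values the objective should encode) flips the choice.

```python
# Two candidate driving plans with made-up progress and risk scores.
plans = {
    "aggressive": {"progress": 1.00, "risk": 0.30},
    "cautious":   {"progress": 0.85, "risk": 0.02},
}

def objective(plan, risk_weight=0.0):
    """Utility = progress minus a weighted risk penalty."""
    return plan["progress"] - risk_weight * plan["risk"]

# With no risk penalty, the misspecified objective picks the aggressive plan.
print(max(plans, key=lambda name: objective(plans[name])))
# With a risk penalty reflecting human values, the cautious plan wins.
print(max(plans, key=lambda name: objective(plans[name], risk_weight=1.0)))
```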
Other Definitions of AI
- Dean, Allen & Aloimonos: flexible programs responding productively in unanticipated situations
- Winston: computations that perceive, reason, and act effectively in uncertain environments
Goals of AI
- Engineering: solve real-world problems with knowledge & reasoning
- Focus on higher-level design & intelligent software entities
- Science: use computers to study intelligence itself
- Test theories of human intelligence by implementing them in code
Perspectives on AI
- Computer science: building theories/programs to solve problems
- Cognitive science: simulate neurology and human cognition
- Psychology: human intelligence studies
- Philosophy: reasoning about perception, learning, memory
Splinter Fields of AI
- Computer vision
- Theorem proving / symbolic computation
- Logic programming
- Natural language understanding
- Robotics
- Data mining
- Machine learning
- Neural networks
- Evolutionary computation/robotics
- Swarm intelligence
- Deep learning
- Reinforcement learning
- Large language models
Evolution of AI
- AI is active, evolving, with conferences & journals
- Phenomenon: once a problem is solved, it often “leaves AI” and becomes mainstream computer science
- Examples: chess playing, OOP, theorem proving, pattern recognition
Advantages of Implementing Intelligence on Computers
- Problem-solving via computation
- Links to tractability, complexity, and programming-language paradigms
- Precision
- Programs must be unambiguous
- Measurement
- Enables empirical analysis
- Computers as guinea pigs
- Ethical way to experiment with “minds”
State of the Art
- Current frontier: covered later in class
- Focus areas include:
- Deep learning
- Reinforcement learning
- Large-scale perception & reasoning systems