MIT 6.858 Computer Systems Security

Lecture 1 - Introduction


0x01. Core Concepts of Security

Security means a system achieves its goals despite the presence of adversaries. Getting this right requires a systematic plan.

High-Level Framework

  • Goal: What the system aims to achieve. Security goals are negative properties: there must be no way to bypass them.
    • Types:
      • Confidentiality: Data secret (e.g., only Alice reads file F)
      • Integrity: No tampering (e.g., adversary cannot corrupt system state)
      • Availability: System continues working (e.g., resists DoS)
  • Threat Model: Assumptions about adversary capabilities (e.g., can guess passwords, cannot steal server).
  • Implementation: Layered.
    • Policy: Configuration rules (e.g., file permissions, password + 2FA).
    • Mechanism: Enforces the rules (e.g., user accounts, encryption); a mechanism at one layer can serve as the policy for the layer below.
    • Policy may include human elements (e.g., no password sharing), but these are outside the mechanisms' scope.
  • Security Definition: Goal + threat model define security; implementation (policy + mechanism) attempts to achieve it.
  • Negative Goal Challenge: Positive goals are easy to implement (e.g., a TA can access grades), but security requires blocking every attack path.
    • Example: 6.566 grades file (AFS server); policy: only TAs read/write (see the sketch after this list).
    • Attack brainstorm: Code bugs, guess passwords, steal laptop, intercept network to registrar, break encryption, trick TA, get registrar job, become TA.
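
A minimal sketch of the policy/mechanism split for the grades-file example, in Python. All names here (the file, the usernames) are hypothetical, not the actual course setup: the policy is the data describing who may do what, and the mechanism is the code that consults it and denies everything else.

    # Policy: data describing who may perform which operation on which resource.
    POLICY = {
        "grades.txt": {"read": {"alice_ta", "bob_ta"}, "write": {"alice_ta", "bob_ta"}},
    }

    # Mechanism: code that enforces the policy and denies anything not explicitly granted.
    def allowed(principal, operation, resource):
        return principal in POLICY.get(resource, {}).get(operation, set())

    assert allowed("alice_ta", "read", "grades.txt")        # a TA may read the grades
    assert not allowed("mallory", "write", "grades.txt")    # anyone else is denied

Changing who the TAs are changes only the policy data; the enforcement code (the mechanism) stays the same.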

0x02. Challenges in Building Secure Systems

  • Why Hard? Negative goals require considering countless attacks; first designs often fail.
  • Iteration Essential: Design → Observe attacks → Update threats/policies.
  • Learn from: Vulnerability databases, bounty programs, post-mortems. Threat models evolve (e.g., computational power, information availability).
  • Defender Disadvantage: Defenders have limited resources and must balance security against usability; attackers need to find only one weakness.
  • Determined Attackers Often Win: Need defense in depth + recovery (e.g., secure backups).
  • No Perfect Security: Not required; economic perspective:
    • Attack cost > system value (deterrence).
    • Defense cost < system value (feasible).
    • Make the system less attractive to attackers than alternative targets (e.g., for generating spam).
  • High-payoff techniques eliminate entire attack classes (e.g., many attacks from 10 years ago are now ineffective).
  • Security sometimes increases value (e.g., VPN enables remote work, JS sandboxing lets browsers run unknown code).
  • Similar to physical security: cost and deterrence trade-offs; but computer attacks are cheap to mount at scale.

0x03. Examples of Security Failures

Categorized: Goals/policies, threat models, mechanisms/bugs, combinations. Many from production/research cases.

3.1 Goal/Policy Issues (System Enforces Policy, But Policy Inadequate)

  • Business-Class Airfare: Allows anytime changes, even after boarding → Perpetual flight.
    Lesson: Corner cases matter; may require architecture changes (e.g., gate updates).
  • TLS Certificate Domain Verification: CA uses OCR to read .eu domain contact addresses from images; ambiguous characters can be misread → attacker obtains certificates for domains they do not control.
    Ref: https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg04654.html
  • Fairfax School System: Teachers can add students and change student passwords; a student adds the superintendent as a “student” → full access to the superintendent's account.
    The effective policy amounts to “teachers are omnipotent.”
    Lesson: Clear security goals, separate from app logic.
    Ref: https://catless.ncl.ac.uk/Risks/26.02.html#subj7.1
  • Sarah Palin Email: Login with password or security questions; questions guessable (high school/birthday).
    The effective policy is “password OR security questions,” not “security questions only if the password is forgotten.”
    Ref: https://en.wikipedia.org/wiki/Sarah_Palin_email_hack
  • Mat Honan Accounts: Amazon adds card without auth, changes email with card verification → Gets last 4 digits → Apple reset → Gmail reset.
    Lessons: Attackers combine individually harmless bits of trivia across sites; big sites need a way to verify the original account creator.
    Ref: https://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/
  • Account Lifetime: Email addresses get reused → other systems still assume the address belongs to its original owner.
    Ref: https://www.gruss.cc/files/uafmail.pdf
  • Insecure Defaults: Router default passwords; AWS S3 buckets left world-readable → data leaks (see the sketch after this list).
    Lesson: Negative goals need secure defaults; with many components it is easy to forget a configuration.
  • Management/Maintenance Pitfalls: Who may change permissions and passwords, access logs and backups, control upgrades and configuration, manage the servers, and revoke an ex-user's privileges?
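
As a concrete illustration of stating a secure configuration explicitly rather than trusting defaults, here is a minimal sketch using boto3 to enable S3's public-access block on a bucket. The bucket name is hypothetical, and the call assumes AWS credentials that are allowed to change bucket settings.

    import boto3

    s3 = boto3.client("s3")

    # Explicitly opt the bucket out of every form of public access, rather than
    # relying on whatever the default or an earlier configuration happened to be.
    s3.put_public_access_block(
        Bucket="example-grades-bucket",   # hypothetical bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

The same habit applies to any component with many settings: spell out the secure configuration so a forgotten default cannot silently leave data world-readable.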

3.2 Threat Model/Assumption Issues (Attack Feasible But Not Considered)

Lessons: Make assumptions explicit; prefer simple, general threat models; favor designs that require fewer assumptions; use defense in depth; learn from past cases.

3.3 Mechanism Issues (Bugs Undermine Security)

3.4 Combination Issues


0x04. How to Build Secure Systems

  • Isolation: The starting point for security. By default, activity X cannot affect activity Y, even if X is malicious or buggy.
    • Types: Hardware/OS (processes, containers, VMs), Software runtimes (JavaScript, WebAssembly), Physical (separate machines, USB keys).
    • Enforcement: The host (OS kernel, language runtime, or physics).
    • Next few lectures cover these.
  • Controlled Sharing: Complete isolation is rarely the goal; sharing is mediated by the guard model:

               +------------------------------------+
               |          Policy                    |
               |            |                       |
     request   |            V                       |
principal --------------> GUARD -------> resource   |
               |            |                       |
               |     +-------------+                |
               |     |      |      |                |
               |     |      V      |                |
               |     |  Audit log  |                |
               |     +-------------+                |
               +------------------------------------+
                      HOST enforcing isolation
  • Guard: Authenticate (principal), Authorize (rights), Audit (isolated log for recovery); see the sketch at the end of this section.
  • Principals: People/devices/programs/services; Resources: Files/services/accounts.
  • Privilege Separation: Limit damage from buggy/malicious components.
  • Challenges: Build a functional system while providing security guarantees and acceptable performance.
    • Second module: Case studies.
  • Software Security:
    • Bug Handling: Runtime defenses/testing/bug finding/verification.
    • Supply Chain: Dependencies/deterministic builds.
    • Backdoors: Reviews/approvals/audits.
    • Deployment: Systematic enforcement.
    • Third module covers these.
  • Distributed Systems: Networks/the Internet introduce new threats.
    • Big Ideas: Crypto/certificates/trust.
    • Fourth module covers these.
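
A minimal sketch of the guard model in Python, assuming a hypothetical in-memory user table and ACL. A real system would store password hashes rather than passwords and keep the audit log on an isolated host, so that a compromised application cannot rewrite it.

    import hmac
    import logging

    logging.basicConfig(filename="audit.log", level=logging.INFO)

    USERS = {"alice_ta": "correct horse battery staple"}   # hypothetical; store hashes in practice
    ACL = {"grades.txt": {"alice_ta": {"read", "write"}}}  # hypothetical authorization policy

    def guard(username, password, operation, resource):
        # 1. Authenticate: establish which principal is making the request.
        expected = USERS.get(username)
        if expected is None or not hmac.compare_digest(password, expected):
            logging.info("DENY (authn) %s %s %s", username, operation, resource)
            raise PermissionError("authentication failed")
        # 2. Authorize: check the policy for this principal, operation, and resource.
        if operation not in ACL.get(resource, {}).get(username, set()):
            logging.info("DENY (authz) %s %s %s", username, operation, resource)
            raise PermissionError("not authorized")
        # 3. Audit: record every granted request so compromises can be investigated.
        logging.info("ALLOW %s %s %s", username, operation, resource)
        return True

    guard("alice_ta", "correct horse battery staple", "read", "grades.txt")  # allowed and logged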