Summary
“AI-Augmented Software Engineering” is a two-day workshop for experienced developers who want to understand how AI coding assistants actually work, where they help, and where they don’t. We’ll go beyond autocomplete tricks and into the underlying mechanics of model context, retrieval, and interaction patterns.
You’ll leave with a grounded, evidence-based understanding of how to use AI tools effectively in complex systems. We’ll look at case studies, failure modes, and practical experiments that clarify what’s going on “under the hood” and how to integrate assistants into an existing engineering workflow without sacrificing rigor, control, or clarity.
This is a workshop for engineers who like to know why things work, and who are curious about how to make AI a transparent and reliable collaborator rather than a mysterious one.
Learning Objectives
- How AI coding assistants represent and process context — and what that means for large, real-world projects.
- How to reason about model behavior, retrieval, and prompting from first principles.
- The taxonomy of model interactions (crafting, consuming, and surrounding skills).
- How to systematically debug, refactor, and test with AI while controlling for drift and hallucination.
- Patterns for building reproducible AI workflows and documenting augmented development practices.
- Evaluation frameworks to measure when and how AI is actually improving your engineering throughput.
Outline and Agenda
Day 1: Foundations of AI-Augmented Engineering
Morning:
- Lecture: The three categories of model interaction engineers work with (crafting, consuming, and surrounding skills).
- Exercise: Apply those categories to your own workflows.
- Lecture: The broad skill categories beyond code completion.
- Exercise: Identify where your current practice fits.
- Lecture: Energy, compute, and resource costs in model consumption.
- Exercise: Estimate tradeoffs in applied assistant scenarios.
- Lecture: What AI coding assistants can and can’t access.
- Exercise: Probe context boundaries with real tools.
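To make the resource-cost discussion concrete, here is a minimal back-of-envelope sketch of the kind of estimate the exercise asks for. The 4-characters-per-token heuristic and the per-token price are illustrative placeholders, not figures for any particular model or vendor:

```python
# Rough token and cost estimate for sending a file as assistant context.
# Assumes the common ~4 characters-per-token rule of thumb and a hypothetical
# price of $3 per million input tokens -- both placeholders for the exercise.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English/code."""
    return max(1, len(text) // 4)

def estimate_cost_usd(text: str, usd_per_million_tokens: float = 3.0) -> float:
    """Approximate input cost of including `text` in one request."""
    return estimate_tokens(text) / 1_000_000 * usd_per_million_tokens

source = "def add(a, b):\n    return a + b\n" * 500  # stand-in for a module
print(f"~{estimate_tokens(source)} tokens, ~${estimate_cost_usd(source):.4f} per request")
```

A real tokenizer will differ by model, but even this crude arithmetic shows why "just paste the whole repo" is rarely a workable strategy.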
Afternoon:
- Lecture: Anatomy of a prompt and prompt engineering patterns.
- Exercise: Compare prompt structures and outcomes.
- Lecture: Context windows, token limits, and context management.
- Exercise: Apply context management strategies to refactoring tasks.
- Lecture: Retrieval-Augmented Generation (RAG) theory and use cases.
- Exercise: Add external docs to improve assistant performance.
- Lecture: Model Context Protocol (MCP) and local integrations.
- Exercise: Connect an MCP tool to an example workspace.
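The retrieval step in RAG can be illustrated with a toy pipeline: rank document chunks against the query, then prepend the best matches to the prompt. Production systems use embedding similarity rather than word overlap, and all names below are illustrative, but the shape of the pipeline is the same:

```python
# Toy retrieval-augmented prompt builder: score doc chunks by keyword overlap
# with the question, then prepend the top matches as context. Word overlap
# keeps the sketch dependency-free; real RAG uses embedding similarity.

def score(chunk: str, query: str) -> int:
    """Number of query words that also appear in the chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Select the k best-matching chunks and assemble the augmented prompt."""
    top = sorted(docs, key=lambda d: score(d, query), reverse=True)[:k]
    context = "\n---\n".join(top)
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The billing service retries failed charges three times.",
    "Deploys run through the staging cluster first.",
    "Charges are settled nightly by the billing cron job.",
]
print(build_prompt("How does the billing service retry charges?", docs))
```

Even at this scale the key property is visible: the assistant only sees what retrieval surfaces, so retrieval quality bounds answer quality.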
Day 2: Applied Practices and Evaluation
Morning:
- Lecture: Privacy and impact considerations for AI-assisted work.
- Exercise: Analyze and mitigate privacy risks.
- Lecture: Preparing context documents for legacy codebases.
- Exercise: Craft and test context summaries for an example repo.
- Lecture: Automated test generation with AI.
- Exercise: Implement unit tests from spec comments.
- Lecture: Verification frameworks for AI outputs.
- Exercise: Build and use accuracy and progress checks in prompt chains.
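The verification idea can be made concrete with spec-derived assertions run against generated code before it is accepted. This is a sketch under stated assumptions: `generated_src` stands in for real assistant output, and the harness shown is illustrative, not a production sandbox:

```python
# Sketch of a verification harness: execute assistant-generated code in a
# scratch namespace, then run spec-derived checks against it before accepting
# it. `generated_src` is a stand-in for real assistant output.

generated_src = '''
def slugify(title):
    return "-".join(title.lower().split())
'''

def verify(src: str, checks: list) -> list[str]:
    """Return a list of failure messages; empty means all checks passed."""
    namespace: dict = {}
    exec(src, namespace)  # note: only safe here because the input is our own
    failures = []
    for name, args, expected in checks:
        got = namespace[name](*args)
        if got != expected:
            failures.append(f"{name}{args}: expected {expected!r}, got {got!r}")
    return failures

checks = [
    ("slugify", ("Hello World",), "hello-world"),
    ("slugify", ("AI Augmented",), "ai-augmented"),
]
print(verify(generated_src, checks) or "all checks passed")
```

The point is the workflow, not the mechanism: checks come from the spec, not from the generated code, so the model cannot grade its own homework.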
Afternoon:
- Lecture: Refactoring code for clarity and context retention.
- Exercise: Restructure a module for assistant readability.
- Lecture: Recovering from AI mistakes and drift.
- Exercise: Diagnose and repair a generation that has gone off track.
- Lecture: Organizing code-assistant documentation for team use.
- Exercise: Prepare internal documentation that your team and their assistants can use on your repo.
- Remainder: Q&A, specific cases, and troubleshooting.
Who is this for?
This workshop is for professional software engineers, staff engineers, and technical leads who work on large or complex projects and want to understand, technically and pragmatically, how to integrate AI coding tools. It’s designed for folks who are skeptical but curious: you’ll see real demonstrations, measurable results, and open discussion about when these tools help and when they don’t.
Requirements
Skills: Proficiency in at least one general-purpose programming language (e.g., Python, Rust, Java, or C++), plus familiarity with version control and large-project structure.
Setup:
- A laptop with Git and VS Code installed.
- An active GitHub account with access to GitHub Copilot or equivalent AI assistant (e.g. Cursor, Claude, Codex). Demonstrations are likely to use Claude.
- Internet access and a modern browser (we'll work to ensure reliable Wi-Fi at the venue).
- Optional: Access to a sample or personal project repo for exercises.
Testimonial
Chelsea is one of the most engaging programming teachers I've learned from in recent memory.