How to Run an AI Hackathon That Actually Ships
Why governance, ownership, and operational alignment matter before the event even starts.
DEAR STAGE 2: We want to run an internal AI hackathon to surface support and CS use cases, but we’re worried the ideas will die without buy-in from IT, security, and data teams. How do we set this up so it actually leads to execution? ~HACKATHON TO EXECUTION GAP
DEAR HACKATHON TO EXECUTION GAP: Running a hackathon is a great idea in theory, but turning the winning ideas into something that changes how your team works is where things break down.
We recently brought together a group of Stage 2 LPs, including Jeb Dasteel, former Chief Customer Officer at Oracle, David Hwang, CCO and Interim CRO at Grammarly, and Colin Murphy, CCO at Zendesk, to dig into this exact challenge. Below is a summary of their collective takeaways. Their read: these projects fail because they weren’t designed or scoped to survive governance, resourcing, and cross-functional realities.
If you want your hackathon to produce something that actually ships, you have to build the operating model before you build the prototype.
Start with the business outcome, not the experiment
A common pattern: teams come out of a hackathon with great ideas, then realize they don’t have buy-in from the business units they need (IT, data science, security) to actually unlock them. The energy dissipates, and the idea dies in a backlog somewhere. Getting those stakeholders into the conversation early, before the hackathon rather than after, is what separates the experiments that ship from the ones that don’t.
Match oversight to risk
Not every AI initiative requires the same level of governance. Internal workflow automation moves much faster because the exposure is lower. Once automation touches customers directly, the oversight model changes.
Separate internal productivity use cases from customer-facing automation. Internal use cases often include knowledge retrieval, drafting communications, or demo preparation. These create quick wins and build momentum. Customer-facing automation requires deeper collaboration with security and data teams.
If governance enters the conversation after the hackathon, it feels like friction. If governance is built into the evaluation criteria from the start, it becomes part of the design.
Pressure test your foundation before you automate it
Many teams assume their documentation is ready, then discover it isn’t updated, isn’t structured, and is full of assumptions that were never written down.
Preparing for AI-powered onboarding automation often means going back and designing a prescriptive experience from scratch. That means defining the journey, creating the content, and filling in the data gaps before any automation is possible.
Many teams discover that the real work is not building the AI layer. It is cleaning up what sits underneath it. If that’s the case, take a beat to do this foundational work first.
Plan for maintenance, not just momentum
The build versus buy discussion is different today than it was even a year ago. Tools evolve quickly, and internal teams are more capable than ever. But the real question is not whether you can build something. The question is whether you are prepared to support it long term.
After the hackathon, assign a clear owner and ask the hard resourcing questions:
Who is going to maintain this?
Does your team have the capacity to keep up with it as the technology keeps moving?
This was the make-or-break factor that came up again and again: not whether you can get something built, but whether you have the people to sustain it. If you don’t have a clear answer, that’s worth resolving before you commit.
Until next week!