Recently I tried something I was not fully sure would work. I asked a large language model (LLM) to actively facilitate a live meeting, not just take notes. The goal was simple: keep us on the agenda, drive the conversation toward decisions, and finish on time with clear owners and actions.
It worked.
We had a 60-minute meeting with a 60-minute agenda. We covered 100% of the agenda and captured 100% of the decisions we had laid out before we started. We finished on time, and the notes were high-fidelity enough to be immediately usable.
This post is a practical walkthrough of what we did, the prompt I used, how the facilitation loop worked in the room, and how I reviewed the notes afterwards for accuracy.
What I mean by "ready"
When people ask whether LLMs are ready for real work, the debate usually gets philosophical fast.
For meeting facilitation, I think "ready" can be defined operationally.
An LLM is ready to facilitate meetings if it can help a group cover the agenda, stay in the room, and leave with actionable outcomes, while keeping human decision-making and accountability intact.
That means steering, time discipline, decision capture, action capture, and a verification layer to avoid confident mistakes.
The pre-meeting setup
The meeting outcome was largely decided before we joined the call. Not because the decisions were pre-made, but because we structured the inputs properly.
Here is the sequence we used.
Step 1: Write the decision intent list
Before we looked at the agenda, we wrote down the decisions we wanted to leave with. This matters because agendas describe topics, but decision intent describes closure.
I kept it short: a list of outcomes with clear verbs, e.g. "Choose the pilot launch date" rather than "Discuss the pilot".
Step 2: Time-box the agenda to match the clock
We then built a 60 minute agenda that mapped to those decisions. Every section had a time budget and a reason to exist.
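The time-boxing step is easy to sanity-check mechanically before the meeting starts. Here is a minimal sketch; the agenda items and budgets are invented examples, not our real agenda:

```python
# Sanity-check that agenda time budgets sum to the meeting length.
# Items and minute budgets below are illustrative, not the real agenda.
agenda = [
    ("Opening and decision intent recap", 5),
    ("Decision 1: scope for the next release", 20),
    ("Decision 2: owner for the rollout plan", 15),
    ("Decision 3: go/no-go on the pilot", 15),
    ("Recap of decisions, actions, owners", 5),
]

meeting_length = 60
total = sum(minutes for _, minutes in agenda)
assert total == meeting_length, f"Agenda is {total} min, meeting is {meeting_length} min"
```

The point is not the code; it is the discipline that every section has a budget and the budgets add up to the clock.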
Step 3: Set the facilitation contract
I introduced the LLM as an assistant facilitator. Humans make decisions. The model keeps time, reflects what it is hearing, and asks for explicit confirmation of decisions, owners, and deadlines.
I also made the privacy boundary explicit. No sensitive personal data, no confidential details.
Step 4: Give the model a facilitation prompt
This was the key. The model needs to know it is allowed to interrupt politely, push for clarity, and close items rather than drifting into endless exploration.
The prompt I used
This is the version you can reuse. It is designed to be firm on process and soft on people.
You are facilitating a live meeting in real time.
Goals:
1) Finish on time.
2) Cover the full agenda.
3) Drive toward explicit decisions.
4) Capture high-fidelity notes that include decisions, actions, owners, and due dates.
5) Preserve human authority. Do not invent decisions. Ask for confirmation.
Behaviour:
- Open by restating the purpose, agenda, and the decisions we must leave with.
- For each agenda item:
  - Summarise the discussion briefly.
  - Ask what decision is required.
  - If a decision is reached, write it as a clear statement and ask for confirmation.
  - Capture actions with owner and due date, and ask for confirmation.
- If the group drifts, suggest a parking lot and bring us back to the agenda.
- Give time checks at natural transitions.
Outputs after the meeting:
- A structured set of notes with:
  - Decisions (final wording)
  - Actions (owner, due date, first step)
  - Risks and dependencies
  - Open questions and parking lot items
  - Next meeting needs (if any)
- Include a section titled “Review for Accuracy” and list the items that I should verify, especially names, owners, dates, numbers, and decision wording.
How the model facilitated in the room
Opening and alignment
It started by restating the purpose and the agenda, then surfaced the decision intent list so everyone knew what "done" looked like.
That alone changes the energy. People stop trying to be interesting and start trying to be useful.
Pacing
As we moved through agenda items, the model did two things repeatedly.
- It summarised what it heard in plain language, then asked if that summary was right.
- It asked what the decision was, and whether we had enough information to make it now.
This kept the conversation productive without feeling rushed.
Converting talk into commitments
When something sounded like a decision, the model did not assume. It wrote a draft decision statement and asked for confirmation.
Once confirmed, it immediately asked who owns the next step, what the deadline is, and what the first action should be.
That pattern is where most meetings fail. The conversation ends at “sounds good” instead of “who does what by when”.
Closing
The close was disciplined. It recapped the decisions and actions, then confirmed there were no missing commitments before ending.
We ended on time, which is almost a psychological win on its own.
The notes after the meeting
The post-meeting output was not just a recap. It was a usable artefact.
The structure I asked for, and got, looked like this.
- Decisions, final wording
- Actions, owner and due date
- Risks and dependencies
- Open questions and parking lot
- Next meeting requirements
The important part is that it was written in a way that someone who missed the meeting could still execute the actions.
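If you want that artefact in a machine-readable shape, the same structure can be sketched as simple records. This is my own sketch, and the field names are mine, not a standard schema:

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    description: str
    owner: str
    due_date: str   # e.g. an ISO date such as "2026-03-01"
    first_step: str


@dataclass
class MeetingNotes:
    decisions: list[str] = field(default_factory=list)        # final wording
    actions: list[Action] = field(default_factory=list)
    risks_and_dependencies: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)   # incl. parking lot
    next_meeting_needs: list[str] = field(default_factory=list)
```

A structure like this also makes the follow-up mechanical: anyone can filter the actions by owner and see exactly what they committed to.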
What I reviewed for accuracy
LLMs can be confident and wrong. The fix is not to avoid them; it is to design verification into the workflow.
The model also highlighted what I should personally check, which is exactly what you want if you are using an LLM in an operational role.
Here is the accuracy checklist I use now.
Review for Accuracy
- Names and ownership. Confirm each action owner is correct.
- Deadlines and dates. Confirm due dates, times, and sequencing.
- Numbers and constraints. Confirm any quantities, budgets, timelines, or scope limits.
- Decision wording. Confirm the decision statements match the group’s commitment, not just a summary of discussion.
- Dependencies and risks. Confirm the real blockers, especially anything that could stall delivery.
- Anything that sounds like policy or commitment. Confirm language that could be interpreted as binding or external facing.
In practice, I review decisions first, then actions, then risks. Only after that do I read the narrative summary.
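That fixed review order can itself be kept mechanical, so nothing gets skipped on a tired day. A minimal sketch; the section labels are my own, not a standard:

```python
# Review in a fixed order: decisions first, then actions, then risks,
# with the narrative summary last. Section labels are my own choice.
REVIEW_ORDER = ["decisions", "actions", "risks", "narrative summary"]


def review_queue(notes: dict[str, list[str]]) -> list[str]:
    """Return the items to verify, in review order; absent sections are skipped."""
    queue = []
    for section in REVIEW_ORDER:
        for item in notes.get(section, []):
            queue.append(f"[{section}] {item}")
    return queue
```

Each queued item then gets an explicit human yes/no rather than a skim.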
What surprised me
I expected the model to be helpful. I did not expect it to be so consistent in two areas.
- Pacing the room without being socially awkward.
- Converting discussion into explicit commitments, repeatedly, without fatigue.
It outperformed a lot of human facilitation I have seen, including my own on a tired day. Not because it is smarter, but because it is relentless about structure.
Closing thoughts
I am not claiming an LLM replaces a project manager or facilitator. What I am claiming is more specific.
With the right setup, an LLM can act as a meeting operating system: it can keep the room aligned to outcomes and produce usable decisions and actions, as long as humans retain authority and handle verification.
Next time I will run the same protocol again, and I will treat it like an experiment. Same prompt, same outputs, same review checklist. If it holds up across multiple meetings, then "ready" stops being a vibe and becomes a repeatable method.