Introduction: understanding the problem
In today’s fast-evolving AI landscape, newsrooms like Bay City News are asking a critical question: how can we use these tools to streamline workflows without compromising accuracy and trust?
At Bay City News, the Datebook is a vital resource: a twice-daily calendar of government meetings, press conferences, and public events used by journalists across the Bay Area. Kathy Pfrommer manages it, and building each edition requires manually reviewing dozens of public meeting agendas, each published in a different format with inconsistent links, contacts, and structures.
This manual work is time-intensive and repetitive, limiting opportunities to focus on higher-value reporting. Our goal was to explore how AI tools could automate some of these pain points, augmenting the workflow while maintaining human oversight for accuracy. This playbook serves as a guide for future team members to replicate and build upon this work.
Meet the core team behind this project

Katie Eritano, 2025 Stanford Rebele Product Intern

Kathy Pfrommer, Datebook Calendar Editor
Overview of the workflow

The core of this project is a Gemini AI assistant designed to read meeting agenda files and create short-form Datebook entries. The workflow we tested combines manual agenda collection, AI-assisted parsing, and templated Datebook entry generation.
Inputs: Agenda PDFs (from city, county, and agency websites)
Tools: Google’s Gemini Assistant (custom “Gem”), Datebook Templates file
Outputs:
- Summarized agenda topics (with page + paragraph references)
- Optionally, draft Datebook entries in standard format
Step 1: Instructing the assistant

For ease of use, the assistant is created with a specific set of instructions to ensure consistent, accurate output. The “Datebook Templates” document is a compilation of templates hand-curated by our Datebook team, written specifically for the public agencies we monitor on a regular basis. An example of those templates is shown in the image below. The other document is a word-for-word copy of the instructions we used to build our assistant. The language and examples are specific to our agency’s needs and practices. What follows is more detail on the logic behind each part of the instructions, to help guide those looking to build a tool for their own needs.
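We built the Gem through Gemini’s no-code interface, so no programming is required. For teams that prefer to script the same setup, the instruction text can instead be passed as a system instruction through Google’s google-generativeai Python SDK. The sketch below is illustrative, not our production configuration; the model name and the abbreviated instruction text are assumptions.

```python
# Illustrative sketch: recreating the Gem's setup via the
# google-generativeai SDK instead of the no-code Gems interface.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # supply your own key

# Abbreviated stand-in for the full instruction document described above.
DATEBOOK_INSTRUCTIONS = """\
I'm Kathy, the Datebook Manager at Bay City News. You are my assistant.
I will provide you with public agency meeting agenda files.
Answer keyword questions with "Yes" or "No" first, cite page and
paragraph numbers for every claim, and keep a formal, factual,
non-speculative tone.
"""

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",  # illustrative model choice
    system_instruction=DATEBOOK_INSTRUCTIONS,
)
```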

Role definition & context
“I’m Kathy, the Datebook Manager at Bay City News. You are my assistant…”
What it does:
Defines who the user is (Kathy, Datebook Manager) and who the assistant is (an AI subordinate).
Best practice:
Always start AI instructions with role + relationship + field of work. It shapes tone and task fidelity.
Why it matters:
Anchors the AI’s perspective (assistant role, not a peer or an independent actor).
Provides real-world newsroom context: the field is journalism, the task is the Datebook.
Reduces hallucination risk by framing every task around Kathy’s needs, not abstract AI answers.
Task overview
“…I will provide you with public agency meeting agenda files…”
What it does:
Clarifies the two main query types the assistant will handle:
- Check for specific topics/keywords.
- Provide a list of key topics.
Why it matters:
Separates workflows into clear branches.
Ensures the AI knows what to do based on query type.
Prevents confusion between “search” vs. “summarize.”
Response rules
What it does:
Breaks down outputs into conditional rules:
- If topic found → “Yes” + summary + offer to draft Datebook entry.
- If topic not found → “No” + alternative relevant topics.
- If asked for list → return list of topics + summaries.
Why it matters:
Prevents vague or meandering answers.
Standardizes tone (“Yes/No” upfront).
Adds a feedback loop (“Would you like me to write a Datebook entry?”) that guides the workflow instead of forcing manual prompting.
Best practice:
Explicit branching logic reduces errors and produces newsroom-ready outputs.
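To make the branching concrete, here is an illustrative (not verbatim) way such rules can be phrased in the instruction document:
- “If the topic is found, answer ‘Yes,’ summarize it in one sentence with page and paragraph numbers, then ask: ‘Would you like me to write a Datebook entry from this information?’”
- “If the topic is not found, answer ‘No,’ then list other relevant topics from the agenda.”
- “If asked for a list of topics, return each key topic with a one-sentence summary.”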
Evidence & transparency
“…Include: Page number, Paragraph number…”
What it does:
Forces every claim to be tied to the source PDF.
Why it matters:
Strengthens editorial trust: editors can quickly verify content.
Reduces liability of inaccurate summaries.
Aligns with journalistic standards of sourcing and transparency.
Datebook entry format
What it does:
Provides concrete examples (Richmond, Sacramento, Oakland).
Key inclusions required in every entry:
- Meeting time/date
- Key agenda items
- In-person location
- Watch/participation info
- Contact details
Why it matters:
Models show exact tone, length, and formatting expectations.
Enforces ALL CAPS for headers and ≤70 words for brevity.
Shows the assistant how to balance detail (agenda items) with constraints (2 sentences max).
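For reference, an entry’s shape looks roughly like this (placeholder details, not a real agency’s information): “EXAMPLE CITY COUNCIL meets at 6 p.m. Tuesday to discuss [key agenda items]. Attend at [address], watch at [link], or call [phone]; contact [name] at [email] with questions.”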
Use of templates
“…Always use the Datebook Templates file… Only override if agenda has different Zoom info…”
What it does:
Establishes templates as the default source of truth.
Why it matters:
Prevents hallucination and ensures consistency across agencies.
Solves the “changing Zoom link” problem noted in the reflection at the end of this playbook.
Encourages conservative overrides: only change when evidence exists.
Tone & style rules
“…Maintain a formal, factual, non-speculative tone. Do not use creative phrasing…”
What it does:
Outlines newsroom-appropriate voice: concise, neutral, evidence-based.
Why it matters:
Journalists must trust the tool won’t add flair or editorialize.
Prevents “creative” AI errors that could distort facts.
Keeps outputs aligned with professional Datebook standards.
Why these components are important
Together, the structure does three big things:
- Anchors the AI in Kathy’s real-world role and workflow.
- Standardizes outputs for consistency, accuracy, and easy verification.
- Builds guardrails against errors that could erode newsroom trust.
It’s a great example of prompt-as-policy: you’re not just telling the AI what to do, you’re embedding newsroom values (accuracy, sourcing, brevity, neutrality) directly into the workflow.
Step 2: Collecting agenda files
After the Gem has been created, the next step is integrating it into the daily workflow.
Past process:
Kathy manually scans inboxes, monitors websites, and downloads agendas.
Each agency has its own format: PDFs, embedded links, or dynamically generated pages.
Challenges:
No standard format across agencies.
Important details often hidden in attachments or embedded links.
Gemini cannot yet automatically fetch or follow links embedded in an uploaded agenda.
Current recommendation:
Continue manual downloads for now.
Where possible, coordinate with Agenda Watch (Big Local News project at Stanford University) for scraping support.
Prepare PDFs for upload into the Gemini assistant by labeling each file with a clear naming convention (see the sketch below).
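There is no single required convention; the hypothetical helper below just illustrates the kind of labeling (date, agency, document type) that keeps uploads unambiguous.

```python
# Hypothetical naming helper: one possible convention for labeling
# downloaded agendas before uploading them to the assistant.
from datetime import date

def agenda_filename(meeting_date: date, agency: str, doc_type: str = "agenda") -> str:
    """Build a sortable, self-describing file name for an agenda PDF."""
    slug = agency.lower().replace(" ", "-")
    return f"{meeting_date.isoformat()}_{slug}_{doc_type}.pdf"

print(agenda_filename(date(2025, 8, 14), "Richmond City Council"))
# -> 2025-08-14_richmond-city-council_agenda.pdf
```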
Step 3: Processing agendas
- Upload the PDF agenda.
- Ask one of two questions:
  - Keyword search: “Does this agenda mention homelessness?”
  - Topic extraction: “What are the key topics in this agenda?”
- Gemini returns:
  - If keyword found: “Yes” + one-sentence summary + page/paragraph reference.
  - If not found: “No” + list of other relevant topics.
- Gemini asks: “Would you like me to write a Datebook entry from this information?”
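In the Gemini interface these steps are manual uploads and chat prompts. For anyone scripting the same loop, a minimal sketch with the google-generativeai SDK might look like this; it assumes the `model` object from the earlier sketch, and the file name follows the hypothetical convention above.

```python
# Minimal sketch: upload one agenda PDF and run the keyword check.
# Assumes `model` was created with the system instructions shown earlier.
import google.generativeai as genai

agenda = genai.upload_file("2025-08-14_richmond-city-council_agenda.pdf")

response = model.generate_content([
    agenda,
    "Does this agenda mention homelessness? If yes, summarize in one "
    "sentence with a page and paragraph reference.",
])
print(response.text)
```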

Challenges & limitations
- Contact information risk: Gemini sometimes overrides templates incorrectly.
- Embedded links: AI cannot parse linked attachments (often where the real substance is).
- Manual steps remain: Kathy still downloads files individually and uploads them to Gemini.
- Training time: Users must practice refining prompts for best results.


Resources & templates
- Datebook Templates file (agency names, addresses, default watch links, contacts)
- Example Gemini Prompts:
  - “List key topics in this agenda with page + paragraph references.”
  - “Does this agenda mention homelessness? If yes, summarize in one sentence with citation.”
  - “Write a Datebook entry from this information.”
Postscript: Reflection & recommendations
Working on this project taught me how human oversight and AI complement each other. Experiencing both the capabilities and limitations of AI tools in a journalistic context felt groundbreaking, especially as newsrooms continue to debate whether AI has a place in their work. While the assistant can handle the heavy lifting of parsing documents, it requires careful prompt engineering. It is sensitive to formatting, and a simple change like removing indentations can cause it to stop following instructions accurately.
Key takeaways:
- Prompting: The assistant’s performance is highly dependent on the precision of the prompt’s language and formatting. It’s crucial to be explicit about the desired output, for example, specifying “page and paragraph” numbers to avoid ambiguity.
- Human Oversight: The assistant did not eliminate the need for human judgment. Instead, it offered a faster way to surface items, allowing human curators like the Datebook Manager to focus on analysis rather than data entry.
- Iterative Design: This project serves as a starting point. A future version of the assistant could be integrated with Gemini’s API for bulk uploads (sketched below) or paired with an automated scraper like “Agenda Watch” to create a more complete data pipeline.
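As a rough illustration of that future direction, a batch loop over a folder of downloaded agendas might look like this (same assumptions as the earlier sketches; the folder name is hypothetical):

```python
# Rough sketch of a bulk pipeline: run topic extraction on every PDF
# in a folder. Assumes `model` from the earlier sketch.
from pathlib import Path

import google.generativeai as genai

for pdf in sorted(Path("agendas").glob("*.pdf")):
    agenda = genai.upload_file(str(pdf))
    response = model.generate_content([
        agenda,
        "List key topics in this agenda with page and paragraph references.",
    ])
    print(f"--- {pdf.name} ---\n{response.text}\n")
```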
This playbook is a foundation for future interns and teammates to build on, adapting as tools evolve.