AI Fluency is the ability to work effectively, efficiently, ethically, and safely with AI as a thinking partner—not just a productivity tool. It's about knowing when to delegate to AI, how to communicate what you need, how to evaluate what it produces, and how to use it responsibly in high-stakes work.
Developed by Ringling College and Cork University Business School, the Framework for AI Fluency identifies four interconnected competencies, the 4 Ds (Delegation, Description, Discernment, and Diligence), that enable effective, efficient, ethical, and safe human-AI interaction:
**Delegation**
What: Creative vision and selection of the right AI tools and techniques to realize that vision
The competency: Understanding when and how to use AI tools effectively in creative and problem-solving processes. This involves analyzing tasks, understanding AI platform capabilities, and making informed decisions about when to use AI for automation, augmentation, or independent agent-mediated experiences.
For Starling Institute: Envisioning goals for UN80 analysis, decomposing tasks into AI vs. human components, selecting the right platform for coalition mapping, balancing AI and human capabilities throughout policy work
**Description**
What: Effectively describing a vision and tasks to prompt useful AI behaviors and outputs
The competency: Communicating ideas, requirements, constraints, and creative visions to AI systems. This encompasses crafting clear, specific prompts using various techniques to guide AI tools in producing desired behaviors and outputs—defining not just what you want, but how you want it created and performed.
For Starling Institute: Product prompting (stakeholder-ready briefings), process prompting (iterative multilateral analysis), performance prompting (diplomatic tone, political sensitivity), translating complex policy requirements into AI-understandable terms
**Discernment**
What: Accurately assessing the quality and appropriateness of AI outputs and behaviors
The competency: Critically evaluating AI-generated content and behaviors against project goals, quality standards, ethical considerations, and domain expertise. This includes product discernment (output quality), process discernment (collaboration effectiveness), and performance discernment (AI behavior appropriateness).
For Starling Institute: Fact-checking member state positions, evaluating quality for donor/member state audiences, ensuring AI hasn't missed critical political nuance or diplomatic context, validating outputs meet Starling's reputation standards
**Diligence**
What: Ensuring ethical practice, transparency, and accountability in AI use
The competency: Maintaining responsible AI practices throughout the creation, deployment, and use of AI systems. This includes creation diligence (ethical development), deployment diligence (responsible release), and transparency diligence (clear communication about AI use and limitations).
For Starling Institute: Protecting confidential diplomatic information, maintaining credibility with donors and partners, transparent disclosure to stakeholders about AI assistance, ensuring accountability in high-stakes multilateral work
Your 8-person team operates at the center of multilateral reform during the most consequential moment in decades.
The bottleneck isn't ideas or relationships—it's human bandwidth. Every hour on document synthesis is an hour NOT spent on strategy or coalition building.
AI Fluency isn't about replacing your expertise—it's about amplifying your natural edge: political savvy, trusted relationships, and nimble strategic positioning.
Three parallel processes converge in 2025-2026. Your capacity to influence them will determine your impact for the next decade.
| How You Work Now | With AI Fluency |
|---|---|
| Position paper synthesis: 2-3 weeks | Position paper synthesis: 2-3 days |
| White paper production: 3-4 weeks | White paper draft: 48 hours |
| One language at a time | 4+ languages simultaneously |
| Limited rapid response capacity | 24-hour turnaround on breaking developments |
| 20-30 major outputs per year | 60-90 major outputs per year |
Minimum cost of equivalent capacity: £900K-£1.4M | Bootcamp investment: £4,500 | ROI: 200-310x
If you adopt AI Fluency in early 2025, you establish a 12-18 month lead over peer organizations. By the time they catch up, you'll already be operating with tested templates, workflows, and governance in place.
The question isn't whether AI will reshape multilateral policy work. It's whether Starling Institute will lead that transformation or scramble to catch up.
Duration: 5 hours 30 minutes (one intensive day)
Format: In-person preferred, virtual available
Participants: Full team (8 people)
Investment: £4,500
| Time | Session | Key Activities |
|---|---|---|
| 45 min | Keynote: AI Fluency for Multilateral Impact | Framework introduction, UN80/SG timing context, live demo of AI analyzing member state positions |
| 15-minute break | ||
| 90 min | Working Session 1: Delegation & Description | Platform capabilities workshop, task decomposition exercise, hands-on prompt engineering, build reusable templates |
| 15-minute break | ||
| 90 min | Working Session 2: Discernment & Diligence | Quality evaluation exercises, political nuance checking, ethics & governance workshop, create evaluation checklist |
| 15-minute transition | ||
| 60 min | Agile Sprint: Real Work Application | 4 teams (2 people each) tackle real Starling challenges, apply all 4 Ds, present outputs to leadership |
15-20 reusable templates for UN analysis, stakeholder briefings, coalition mapping—tested and refined
Day-to-day use: Save hours on repeated tasks. When analyzing new member state positions, start from your tested template instead of a blank page. Instantly adapt briefings for different audiences (donors vs. member states vs. media) using audience-specific prompts.
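For illustration, an audience-specific briefing template can be as simple as a parameterized prompt. This is a minimal sketch: the audiences, field names, and wording below are hypothetical, and the templates built in the bootcamp will be tailored to Starling's actual workflows.

```python
# Hypothetical sketch of a reusable, audience-specific prompt template.
AUDIENCE_NOTES = {
    "donor": "Emphasize impact, value for money, and measurable outcomes.",
    "member_state": "Use neutral diplomatic language; avoid attributing blame.",
    "media": "Lead with the headline finding in plain, quotable language.",
}

BRIEFING_TEMPLATE = (
    "You are drafting a briefing on {topic} for a {audience} audience.\n"
    "Source material:\n{source_text}\n\n"
    "Constraints: {audience_note} Keep it under {word_limit} words."
)

def build_prompt(topic, audience, source_text, word_limit=400):
    """Fill the template for one audience; unknown audiences raise KeyError."""
    return BRIEFING_TEMPLATE.format(
        topic=topic,
        audience=audience,
        source_text=source_text,
        audience_note=AUDIENCE_NOTES[audience],
        word_limit=word_limit,
    )

prompt = build_prompt("UN80 reform", "donor", "(position paper excerpts)")
```

Retargeting the same source material for a different audience is then a one-word change to the `audience` argument, which is what makes a tested template library compound in value over time.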
Decision tree for when to use AI vs. human judgment, platform capability map, task decomposition methodology
Day-to-day use: No more guessing "should I use AI for this?" Follow the decision tree to quickly identify which tasks are AI-appropriate (e.g., synthesis, translation) vs. requiring human expertise (e.g., political strategy, relationship management). Speeds up project planning.
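The routing logic a decision tree encodes can be sketched in a few lines. The task categories below are illustrative examples drawn from the text above, not the bootcamp's actual decision tree.

```python
# Illustrative task triage: which work is AI-appropriate vs. human-led.
# Category lists are hypothetical examples, not the bootcamp's decision tree.
AI_APPROPRIATE = {"synthesis", "translation", "summarization", "first_draft"}
HUMAN_LED = {"political_strategy", "relationship_management", "final_signoff"}

def triage(task_type):
    """Return a routing decision for one task category."""
    if task_type in AI_APPROPRIATE:
        return "delegate to AI, then human review"
    if task_type in HUMAN_LED:
        return "human expertise required"
    return "discuss with team before delegating"

# e.g. triage("synthesis") -> "delegate to AI, then human review"
```

Even AI-appropriate tasks route through human review here, which mirrors the Discernment competency: delegation never means unsupervised output.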
Evaluation checklist for AI outputs, red flags guide, validation workflow for mission-critical work
Day-to-day use: Before sending any AI-assisted work to stakeholders, run it through your checklist: factual accuracy verified? Political nuance preserved? Diplomatic tone appropriate? Catches errors before they damage credibility. Standardizes quality across the team.
Draft ethical guidelines, confidentiality protocols, transparency standards, stakeholder disclosure framework
Day-to-day use: Clear rules on what data goes into AI (never: confidential meeting notes; okay: public statements). Know exactly how to disclose AI assistance to donors. Protects organizational reputation and maintains trust with sensitive partners.
Four teams will each produce a tangible deliverable ready for real use.
Included: 2 weeks of email support for technical questions and troubleshooting
Optional add-ons are available on request.
Immediate (Day of Bootcamp):
30 Days Post-Bootcamp:
Priscila brings 20 years of experience building AI and innovation systems that scale in complex, real-world environments—from global food systems to government policy to Antarctic science.
Contact: priscila@innova10x.com
Schedule Your Bootcamp →