Future-Proofing Congress: Deciphering the AI Alphabet Soup
TIL how to decode the technology behind AI acronyms
The Hill is renowned for its acronyms. During my first couple weeks as a junior staffer, overcoming the knowledge gap of learning Congress’ alphabet soup (from CRS, GAO, and CBO to committee name shorthand) felt like slogging through mud. For better or worse, Congress is a workplace where staff need to constantly build their expertise – particularly to keep up with emerging technologies across industries. Although you are no doubt an expert on the House’s and Senate’s internal lingo at this point, the newest alphabet soup to learn is that of artificial intelligence.
I’ll go into more detail below on the top AI-related acronyms.
Also in this month’s newsletter:
Innovation is beautiful: GAO STAA’s new website
A living exchange of international institutional AI ideas
The cost (and benefit) of being polite to your favorite LLM
Exploring product model funding for government technology
International fix: the UK’s Lawmaker platform
Stay in the game with an AI personal trainer
Warmly,
Aubrey Wilson
Director of Government Innovation
POPVOX Foundation
Top 10 AI Acronyms
Here are ten AI-related acronyms that are likely to cross your desk (if they have not already), with brief explainers of what each one means and when and why it may come up:
AGI - Artificial General Intelligence refers to currently hypothetical AI systems that would possess human-like general intelligence and the ability to understand, learn, and apply knowledge across a huge variety of topics. Unlike today's limited AI systems like large language models (LLMs) or Computer Vision (CV) models (defined below) that excel at specific tasks, AGI is discussed in the context of future systems that could potentially match or exceed human intellectual capabilities.
CV - Computer Vision is the field of AI that enables computers to interpret and understand visual information, such as images or videos. While natural language processing (NLP) focuses on processing text and language, CV addresses the challenge of computers “seeing” and understanding visual data, often using deep learning (DL) architectures similar to those powering LLMs.
DL - Deep Learning is a specialized form of machine learning (ML) that uses neural networks with many layers to process complex information and recognize patterns. This approach has revolutionized both NLP and CV by enabling far more sophisticated capabilities than traditional ML methods, and serves as the foundation for modern LLMs like generative pre-trained transformer (GPT) and other generative AI systems.
GPT - Generative Pre-trained Transformer refers to a specific type of AI model that is pre-trained on a large dataset and, as a result, generates contextually relevant information based on the input it receives. GPT models are a type of NLP LLM, but “GPT” refers specifically to models built on the transformer architecture.
LLM - Large Language Models are AI systems trained on vast amounts of text data that generate human-like responses based on the patterns they have learned. Unlike retrieval-augmented generation (RAG) systems that can access external and/or up-to-date information, traditional LLMs are limited to the knowledge contained in their training data.
ML - Machine Learning is a type of AI that enables computers to learn from data and improve their performance without being explicitly programmed by a human. This technology is the foundation for modern AI tools, from the DL methods used in CV and NLP to the algorithms that power reinforcement learning (RL) systems.
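To make the ML idea concrete, here is a toy sketch (in plain Python, with made-up data and function names) of a program that learns the rule behind its examples by gradient descent rather than being told the rule directly:

```python
# A toy illustration of machine learning: fitting a line y = w*x + b
# to example (x, y) pairs by gradient descent. The program is never
# told the rule that generated the data; it learns it from examples.

def fit_line(data, steps=5000, lr=0.01):
    """Learn slope w and intercept b that best fit the (x, y) pairs."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w  # nudge each parameter downhill on the error
        b -= lr * grad_b
    return w, b

# These examples were generated by the hidden rule y = 2x + 1.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = fit_line(examples)
print(round(w, 2), round(b, 2))  # the learned values approach 2 and 1
```

The same learn-from-examples loop, scaled up to billions of parameters and trained on text or images instead of four number pairs, is what powers the DL systems described above.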
NLP - Natural Language Processing is how computers understand, interpret, and generate human language in useful ways. Today's LLMs use DL techniques to improve computers' NLP abilities.
RAG - Retrieval-Augmented Generation is an AI approach that enhances language models by retrieving relevant information from external sources before generating responses. Unlike standard LLMs that can only access information from their training data, RAG systems overcome knowledge limitations by connecting to external databases, making them especially valuable when accurate, up-to-date information is crucial.
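The RAG pattern can be sketched in a few lines. This is a deliberately minimal illustration, not a production retriever: the documents, the keyword-overlap scoring, and the function names are all invented for this example, and a real system would send the assembled prompt to an LLM rather than print it.

```python
# A minimal sketch of the RAG pattern: retrieve relevant text first,
# then combine it with the question before asking the language model.

documents = [
    "CRS is the Congressional Research Service, Congress's policy research arm.",
    "GAO is the Government Accountability Office, the audit arm of Congress.",
    "CBO is the Congressional Budget Office, which scores the cost of bills.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by shared words with the question (toy scoring).
    Real systems use vector embeddings instead of word overlap."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, docs):
    """Attach the retrieved context to the question for the model."""
    context = "\n".join(retrieve(question, docs))
    return f"Using only this context:\n{context}\n\nAnswer: {question}"

prompt = build_prompt("Which office scores the cost of bills?", documents)
print(prompt)  # in a real system, this prompt would be sent to an LLM
```

The key point is the order of operations: look up current, authoritative text first, then generate, which is why RAG systems can answer questions an LLM's frozen training data cannot.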
RLHF - Reinforcement Learning from Human Feedback is a technique to improve AI systems by having human evaluators rate different AI outputs, teaching the system which responses the end users (humans) prefer. This approach has become an essential practice for aligning commercially available LLMs (like ChatGPT and Claude) with human values. RLHF practices often result in LLMs generating content that is more helpful and less controversial or harmful than models trained solely on predicting text.
Bonus term: Neural Network - a computing system made up of interconnected artificial neurons – inspired by the structure of the human brain – that work together to recognize patterns in data. The development of this technology has been foundational for the creation of both simple ML applications and sophisticated DL systems like LLMs and CV models.
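A single artificial neuron is simple enough to write out by hand. The sketch below (with arbitrary, illustrative weights) shows the core operation: a weighted sum of inputs squashed through an activation function. Real networks stack thousands to billions of these units in layers and learn the weights from data.

```python
import math

# One artificial neuron, written from scratch for illustration.
# The weights and inputs here are made up; in a trained network,
# the weights would be learned from data.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squash to a value between 0 and 1

output = neuron([0.5, 0.8], weights=[0.9, -0.4], bias=0.1)
print(round(output, 3))  # a single activation between 0 and 1
```

Connecting many such neurons so that the outputs of one layer become the inputs of the next is what turns this simple arithmetic into the "deep" networks behind modern LLMs and CV models.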
This list of AI acronyms is just the start of new terminology that may come up in your conversations. For a more comprehensive list, turn to your favorite, institution-approved LLM and prompt: “What AI-related acronyms or terms should I be aware of as a Congressional policy staffer? For each item on the list, please include a two-sentence explainer that defines the term and provides context for why, when, or how it may be relevant.”
Happy prompting!
Innovation is Beautiful: GAO STAA’s New Website
The Government Accountability Office’s Science, Technology Assessment and Analytics Team has given its Innovation Lab website a facelift. Visiting the website not only provides the opportunity to experience how modern government websites can feel, but is also a great excuse to hone your knowledge of what STAA has been up to.
As a reminder, the Innovation Lab is a group of individuals within GAO tasked with exploring new technologies and how to use them to improve GAO’s internal operations. Many of the projects that the Lab undertakes have use-case potential for Congress more broadly, even if only as a proof of concept. Take its internally built LLM, or its ongoing work to use NLP and an LLM to identify engagement opportunities with Members of Congress and their staff.
A Living Exchange of International Institutional AI Ideas
Congress is not the only democratic institution navigating the internal adoption of artificial intelligence. The Inter-Parliamentary Union (IPU) is collecting dozens of international use cases as part of its AI in Parliaments project, in partnership with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. From legislative chatbots to transcription and translation services to classification systems for amendments, and more, the list is a living archive from which other institutions can learn, explore, and find inspiration for further experimentation.
The Cost (and Benefit) of Being Polite to Your Favorite LLM
Recently on X, OpenAI CEO Sam Altman admitted that the company spends millions of dollars in extra computing power processing the “please” and “thank you” that users add to their ChatGPT prompts. However, some argue that politeness sets the tone of the engagement and therefore affects the quality and nature of the LLM’s generated response. This story is an informative (and somewhat charming) read that may influence your AI engagement habits.
Exploring Product Model Funding for Government Technology
On Monday, the Niskanen Center and POPVOX Foundation hosted "Tech That Works: Unlocking Better Models for Digital Government," a session geared toward Congressional staff focused on innovative approaches to funding the development of government technology.
Working to bridge understanding between legislative intent and Executive branch implementation, the session aimed to demonstrate how adopting product model funding — a flexible, iterative approach widely used in the private sector — can save money and boost outcomes of federal government technology development and maintenance. A panel of experienced Executive branch technology program managers shared their firsthand experiences through practical examples and interactive demonstrations.
International “Fix”: Inspiration from Parliaments Around the World
Did you know that numerous UK parliaments use a shared legislative drafting and publication platform called Lawmaker, designed “in house” by the UK’s National Archives? POPVOX Foundation Fellow Dr. Beatriz Rey recently interviewed Matthew Lynch, the platform’s service owner, to learn about the creative process behind its development, design, and continual improvement.
For more news from international legislatures, subscribe to our Modern Parliament (“ModParl”) newsletter.
Stay in the Game
There are countless use cases for GenAI professionally, but our team has found that LLMs are also amazing resources to turn to for workout inspiration. Hello, personal trAIner! Have a workout goal in mind as you prep for pool season? Here is a sample prompt to get your mind racing:
Please help me create a weekly workout routine for the next four weeks. I would like it to include 3 days of cardio, 2 days of strength training, and 2 days of stretching. I consider myself to be a mid-beginner runner, an experienced weight lifter who needs to strengthen mostly my upper body (particularly traps and back), and a yoga novice. Please keep workout lengths to 30-45 mins max. At my disposal I have a treadmill, the ability to run outside, a 10lb weight set, and body weight.
Have a favorite self-care tip you’d like to share? Email us!
Shout Outs & Events
The Modernization Staff Association is hosting a resume and LinkedIn panel for staff on May 22 from 2-3 PM in Longworth 1539.
The annual AI Expo is taking place at the Washington DC Convention Center on June 2-4.
The House Office of the Whistleblower Ombuds has partnered with the Congressional Staff Academy to offer a certification for staffers who complete the suite of classes, Working With Whistleblowers Curriculum.
The Future is Now
The 6th Congressional Hackathon inspired a lot of interesting recommendations for advancing technology in the House, including one that calls for developing a Congressional large language model (LLM).
Some businesses are using AI-powered “decision theatres” for strategic planning. Can we get that for Congress please?
A British MP built a tool (himself!) to help his constituents follow all the words in his speeches — and then expanded for his fellow Members of Parliament.
A family in Arizona used AI to recreate their deceased loved one’s voice and image to deliver a video impact statement in court.
GPO Director Hugh Halpern explained how the Government Publishing Office uses emerging technologies to boost transparency while also preserving artisanal printing skills.
POPVOX Foundation has launched a beta chatbot trained on Congressional staff workflows and FAQs. Here are our lessons learned from its first week being publicly available.
A paralyzed man with a Neuralink implant successfully edited a YouTube video using only his thoughts.
Humans still carried the day in Beijing’s first-ever humanoid robot half-marathon (but how long will that be the case?).
Community colleges are being deluged by “bot students.”
Our friend, Dave Guarino, is pioneering evaluation standards for how commercial LLM models handle complex questions about government benefits, with a focus on SNAP.
About POPVOX Foundation
POPVOX Foundation is a nonpartisan nonprofit that helps democratic institutions keep pace with a rapidly changing world. Through publications, events, prototypes and technical assistance, the organization helps public servants and elected officials better serve their constituents and make better policy.