*1. Control what the AI touches*
- *Segment the work*: Let AI handle low-risk parts like formatting, boilerplate, or first-draft language. Keep it away from facts, numbers, dates, citations, and past performance.
- *No CUI in public tools*: If your draft has Controlled Unclassified Information, don’t use ChatGPT, Claude, etc. Use an on-prem or FedRAMP-approved GovCloud instance.
- *Ground the model*: Give it the exact docs to pull from. “Using only FAR Part 15 and the attached SOW, draft Section L.” Then it’s less likely to invent clauses.
*2. Force source-backing for anything factual*
Make this a rule: _If a fact can be checked, the AI must cite where it came from._
- *Citations required*: “For every case, statute, or FAR clause you cite, include the full citation and a link to govinfo.gov or acquisition.gov.”
- *No numbers without source*: Dollar amounts, timelines, percentages need a bracketed source: [Price Proposal tab B-2, cell F14].
- *Ban invented examples*: Tell it: “Do not create hypothetical vendors, awards, or case names. If none exist, say ‘none found’.”
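The no-numbers-without-source rule is easy to lint for mechanically. A minimal sketch — the regexes and the bracketed-source convention are illustrative assumptions, not a standard:

```python
import re

# Flags dollar amounts and percentages that are not followed by a
# bracketed source tag like [Price Proposal tab B-2, cell F14].
NUMBER = re.compile(r'\$[\d,]+(?:\.\d+)?|\b\d+(?:\.\d+)?%')
SOURCE = re.compile(r'\[[^\]]+\]')

def unsourced_figures(line: str) -> list[str]:
    """Return numeric claims on a line with no bracketed source after them."""
    flagged = []
    for m in NUMBER.finditer(line):
        # Look for a [source] anywhere after the figure on the same line.
        if not SOURCE.search(line, m.end()):
            flagged.append(m.group())
    return flagged

print(unsourced_figures("Total price: $1,250,000"))
print(unsourced_figures("Savings of 30% [Price Proposal tab B-2, cell F14]"))
```

Run it over the draft line by line and anything flagged goes back to the pricing team before submission.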
*3. Build a human verification layer*
Treat AI output like a first-year intern’s work: helpful but untrustworthy.
- *Clause checker*: Manually verify every FAR/DFARS/agency clause number against acquisition.gov. AI loves to invent 52.244-99.
- *Case law check*: Run every case through Google Scholar or Westlaw. Hallucinated cases are the #1 issue in legal filings now.
- *Math & dates*: Recalculate all totals, option periods, and delivery dates. LLMs are bad at arithmetic.
- *SAM.gov / CPARS cross-check*: If the AI mentions past performance, pull the actual CPARS report.
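The clause check can be partly automated before the human pass. A sketch, assuming you maintain a local allowlist exported from your clause library — the three clause numbers in the set are just examples:

```python
import re

# VALID_CLAUSES would be loaded from a clause library you maintain
# (e.g., exported from acquisition.gov); this tiny set is illustrative.
VALID_CLAUSES = {"52.212-4", "52.244-2", "52.249-2"}
CLAUSE = re.compile(r'\b52\.\d{3}-\d{1,3}\b')  # FAR Part 52 clause pattern

def suspect_clauses(text: str) -> list[str]:
    """Return cited clause numbers not found in the local clause library."""
    return [c for c in CLAUSE.findall(text) if c not in VALID_CLAUSES]

draft = "Per FAR 52.244-2 and the termination clause at 52.244-99, ..."
print(suspect_clauses(draft))  # 52.244-99 gets flagged for human review
```

A hit here doesn’t prove the clause is fake — your allowlist may be stale — it just tells the reviewer where to look first.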
*4. Use tooling to catch fabrications*
- *AI-detection + fact-check*: Tools like Originality.ai, Pangram, or Harvey can flag text that looks AI-generated. Then you know where to look hardest.
- *Redline comparison*: Diff the AI draft vs your source docs. Anything not traceable gets flagged.
- *Clause libraries*: Use a real clause database like Icertis or Cobblestone, not whatever the AI remembers.
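The redline comparison can be sketched with the standard library’s difflib. The 0.6 similarity cutoff is an arbitrary starting point, not a validated threshold — tune it for your documents:

```python
import difflib

def untraceable_lines(draft: str, sources: list[str],
                      cutoff: float = 0.6) -> list[str]:
    """Flag draft lines with no close match in any source document."""
    source_lines = [l.strip() for s in sources
                    for l in s.splitlines() if l.strip()]
    flagged = []
    for line in draft.splitlines():
        line = line.strip()
        if line and not difflib.get_close_matches(
                line, source_lines, n=1, cutoff=cutoff):
            flagged.append(line)
    return flagged

sow = "The contractor shall deliver monthly status reports."
draft = ("The contractor shall deliver monthly status reports.\n"
         "Similar efforts have achieved 40% cost savings.")
print(untraceable_lines(draft, [sow]))  # the unsupported claim is flagged
```

Anything flagged is either original analysis (fine, but a human wrote it or owns it) or an AI invention (not fine).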
*5. Document your process for compliance*
Agencies are starting to ask. OMB’s 2024 memo M-24-10 and DoD’s Responsible AI guidance basically say: if you used AI, you own the output.
- *AI-use log*: Track which sections used AI, which model, which prompts, who verified.
- *Attestation*: Add a cover note: “Sections 1-3 drafted with [Tool], verified by [Name] on [Date]. All facts checked against sources.”
- *Internal policy*: If you’re a contractor, write a policy: “No AI for cost/pricing data. All legal citations must be Shepardized by counsel.”
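The AI-use log can be as simple as a JSON-lines file. A sketch — the field names are illustrative, not drawn from M-24-10 or any agency mandate:

```python
import datetime
import json

def log_ai_use(path: str, section: str, model: str,
               prompt: str, verifier: str) -> dict:
    """Append one AI-use record to a JSON-lines audit log and return it."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "section": section,
        "model": model,
        "prompt": prompt,
        "verified_by": verifier,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_ai_use("ai_use_log.jsonl", "Section L draft", "model-name",
           "Draft Section L from attached SOW", "J. Smith")
```

Append-only and timestamped means you can answer “who verified this?” months later without reconstructing anything.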
*Quick prompt hygiene that helps*
When you do use AI, prompt defensively:
> “Draft the evaluation criteria section. Use only FAR 15.304 and the attached RFP. If you cannot find support for a statement, write [VERIFY]. Do not invent case names, statistics, or clause numbers.”
*The rule of thumb*: AI can give you 70% speed, but the last 30% is human verification. In government contracts, that 30% is what keeps you out of False Claims Act trouble.