Commitment
A framework for using AI without surrendering labor dignity, memory, or moral agency. Not a manifesto. A working standard, applied daily.
Before asking whether a system is efficient, stable, or profitable, ask whether people inside it can breathe, eat, think, heal, move, create, relate, and participate without being crushed. If the answer is no, then the system is not serving life. It is managing decline.
When AI shapes language, structure, or output, say so. Not as a disclaimer but as honesty about where the work came from. The ideas are mine. The shaping is collaborative. That distinction matters.
AI can draft, research, and organize. It does not decide what matters, what to publish, or what to believe. Judgment stays with the person. The machine assists; it does not author values.
Automation should expand what people can do, not eliminate the people doing it. If a tool replaces a job, the question is not efficiency -- it is what happens to the person. That question comes first.
Every model run, every training cycle, every inference request has a cost in energy, water, and carbon. Use AI intentionally, not casually. The convenience of generation should not obscure the weight of it.
AI should not be used to monitor, rank, or profile people without their knowledge. The tools on this site do not track visitors, score behavior, or build profiles. Presence here is not data extraction.
When AI-assisted systems on this site break, they fail quietly, not loudly. No hallucinated data presented as truth. No automated actions without human review. If the system cannot verify, it says so.
Music, writing, and visual work on this site are human-originated. AI may help with structure, research, or iteration, but the creative impulse -- the reason something exists at all -- is not delegated.
Section focus
Artificial intelligence is not weightless software. It is physical infrastructure, electricity demand, cooling demand, and procurement pressure that moves through real communities. This panel tracks those hidden costs and where accountability has to be engineered into deployment decisions.
Archive snapshot: interactive charts are unavailable right now. Core accountability notes remain visible below.
Training emission
500+
Large legacy run baseline
Metric tons CO2e for one large legacy model training run (GPT-3 era estimate).
Cooling water
700k
Full-cycle cooling estimate
Liters of freshwater used for one large-model training cycle.
Inference energy
10x
Generative vs. search query
Typical energy ratio of one generative query versus one standard search query.
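The headline training figure above comes from a simple relationship: energy consumed times grid carbon intensity. A minimal sketch of that arithmetic, where the 1,287 MWh run size is a commonly cited GPT-3-scale estimate and the grid intensity is a rough assumption:

```python
# Back-of-envelope training-emissions estimate.
# All input numbers are illustrative assumptions, not measured values.

def training_emissions_tons(energy_mwh: float, grid_kg_co2e_per_kwh: float) -> float:
    """Metric tons CO2e for a run: energy (MWh) x grid intensity (kg CO2e/kWh)."""
    # MWh -> kWh, multiply by kg/kWh, then kg -> metric tons
    return energy_mwh * 1000 * grid_kg_co2e_per_kwh / 1000

# Hypothetical large run: 1,287 MWh (a commonly cited GPT-3-scale figure)
# on a ~0.43 kg CO2e/kWh grid (rough average-mix assumption).
print(round(training_emissions_tons(1287, 0.43)))  # ~553 metric tons, in line with "500+"
```

The point of the sketch is that both factors matter: halving either the energy budget or the grid intensity halves the total.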
Accountability lens 1
Model training runs thousands of accelerators for sustained windows. Emission totals depend as much on grid carbon intensity and facility geography as on model architecture.
Legacy model training versus familiar real-world baselines.
Interactive view is paused. Use the section notes for the same core metrics.
Note: BLOOM was trained on a lower-carbon grid, showing how regional energy mix can sharply reduce training emissions.
Training in high-coal grids can multiply emissions compared with hydro or nuclear-heavy regions. Site selection is a first-order policy decision, not a cosmetic optimization.
Frequent accelerator turnover compounds impact through manufacturing and e-waste. Accountability has to include procurement cadence, not just runtime electricity.
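The site-selection point can be made concrete by holding the run's energy budget fixed and varying only the grid. The intensity values below are rough illustrative assumptions, not measurements of any specific region:

```python
# Same assumed training run, different grid mixes.
# Intensities are rough, assumed values in kg CO2e per kWh.
GRID_INTENSITY = {
    "coal-heavy": 0.80,
    "average-mix": 0.43,
    "hydro/nuclear-heavy": 0.05,
}

RUN_ENERGY_MWH = 1287  # assumed large-run energy budget

for grid, kg_per_kwh in GRID_INTENSITY.items():
    tons = RUN_ENERGY_MWH * 1000 * kg_per_kwh / 1000  # kWh * kg/kWh -> kg -> tons
    print(f"{grid:22s} {tons:7.0f} t CO2e")
```

Under these assumptions a coal-heavy grid emits roughly 16 times more than a hydro- or nuclear-heavy one for the identical workload, which is why site selection is a first-order decision.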
Accountability lens 2
Cooling towers and power generation chains both consume water. As AI demand scales, local water planning and transparency reporting become public-interest concerns.
Approximate trend line as AI infrastructure scaled between 2018 and 2022.
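The cooling-water figure follows a similar estimate: facility energy times water-usage effectiveness (WUE, liters of water per kWh of IT energy). A sketch with assumed values, covering on-site cooling only:

```python
# Rough on-site cooling-water estimate: energy x water-usage effectiveness.
# Both input figures are illustrative assumptions.

def cooling_water_liters(energy_mwh: float, wue_l_per_kwh: float) -> float:
    """Liters of on-site cooling water: energy (MWh -> kWh) x WUE (L/kWh)."""
    return energy_mwh * 1000 * wue_l_per_kwh

# Assumed: a 1,287 MWh run at an on-site WUE of ~0.55 L/kWh,
# before counting off-site water consumed by power generation.
print(f"{cooling_water_liters(1287, 0.55):,.0f} L")  # ~700k liters on-site
```

Off-site water embedded in electricity generation can exceed the on-site figure, which is why the section treats cooling towers and power generation chains as separate water draws.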
Accountability lens 3
Training is episodic; inference is continuous. Each generated response performs token-by-token computation, so scaled usage raises baseline data center demand.
Search retrieves indexed records. Generative inference computes token probabilities in sequence, increasing accelerator utilization per request. At billions of interactions, this changes total grid demand and required backup capacity.
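The scale effect described above can be sketched with assumed per-query figures. The 0.3 Wh search figure is a commonly cited rough estimate; the generative figure simply applies the ~10x ratio noted earlier, and the query volume is hypothetical:

```python
# How per-query energy differences compound at scale.
# Per-query figures and query volume are assumptions for illustration only.

SEARCH_WH_PER_QUERY = 0.3       # commonly cited rough figure for web search
GENERATIVE_WH_PER_QUERY = 3.0   # applies the ~10x ratio from the section stats

def daily_mwh(queries_per_day: float, wh_per_query: float) -> float:
    """Aggregate daily energy in MWh: query volume x per-query Wh, Wh -> MWh."""
    return queries_per_day * wh_per_query / 1_000_000

Q = 1_000_000_000  # one billion queries per day, hypothetical
print(daily_mwh(Q, SEARCH_WH_PER_QUERY))      # ~300 MWh/day
print(daily_mwh(Q, GENERATIVE_WH_PER_QUERY))  # ~3,000 MWh/day
```

A fixed 10x per-query gap stays a 10x gap at any volume, but only at billions of daily queries does it become a grid-planning quantity rather than a rounding error.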
Counterbalance
Accountability is not anti-AI. It is pro-measurement. The same systems can reduce emissions when deployed toward grid balancing, logistics efficiency, and material innovation.
Real-time balancing can improve renewable integration and reduce curtailment waste.
Accelerated candidate search can compress R&D cycles for batteries and energy systems.
Route planning and precision operations reduce fuel, fertilizer, and water overhead.
This pledge exists because the tools are powerful and the defaults are not neutral. AI systems are built by companies with incentives that do not always align with the people using them. The convenience of generation can obscure the cost of extraction. The speed of automation can erase the dignity of labor.
This is not anti-technology. This site uses AI actively -- for research, writing assistance, code, data sync, and archive management. But using a tool and being accountable for how you use it are not the same thing. This page is the accountability part.
People should not have to prove they are worth helping by first becoming a crisis. Systems should not have to break before someone asks whether they were serving life.