AI Power Badge — Python integration
====================================

This adds AI Power Badge tracking to your own Python program in two
lines of code. The badge is a Chrome extension that displays your
real-time AI usage as wattage. By default it tracks browser usage on
sites like ChatGPT and Claude. With this snippet, your local Python
script's API calls show up too.

What you need
-------------
1. The AI Power Badge Chrome extension installed (v1.9.0 or later).
2. A Python program (3.8+) that uses the official OpenAI, Anthropic,
   or Google Gemini SDK.
3. The ai_power_badge_telemetry.py file from this download.

Installation
------------
1. Save ai_power_badge_telemetry.py next to your program's main script.
   (No pip install, no separate package.)

2. Open your program's main Python file. Near the top, after your
   AI SDK imports, add these two lines:

       import ai_power_badge_telemetry
       ai_power_badge_telemetry.enable("External: Your Program Name")

   Replace "External: Your Program Name" with whatever you want shown
   in the badge popup -- e.g. "External: Proposal Writer" or
   "External: My Research Agent." Keep the "External:" prefix so it
   doesn't get visually confused with browser-tracked rows.

3. Run your program normally. Each AI API call will be logged.

4. To see the calls in the badge: click the AI Power Badge icon in
   your Chrome toolbar, then click "Import activity file." Pick the
   JSONL file at the path the popup shows you.

Where the log file lives
------------------------
By default, the snippet writes to:

    Windows:  C:\Users\<you>\Documents\AI Power Tracking\external_intervals.jsonl
    Mac:      ~/Documents/AI Power Tracking/external_intervals.jsonl
    Linux:    ~/Documents/AI Power Tracking/external_intervals.jsonl

This is the same path the AI Power Badge popup expects by default.
You can copy this path from the badge popup with one click.
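
If you want to compute the same path from Python -- say, to print it at
startup -- a minimal sketch looks like this. The function name
`default_log_path` is illustrative (the snippet's own internal helper is
`_log_file_path()`); it simply joins the Documents folder under the
user's home directory, which matches all three paths listed above:

```python
from pathlib import Path

def default_log_path() -> Path:
    # Documents/AI Power Tracking under the user's home directory,
    # matching the Windows, Mac, and Linux paths listed above.
    return Path.home() / "Documents" / "AI Power Tracking" / "external_intervals.jsonl"

print(default_log_path())
```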

What gets logged
----------------
For each AI API call your program makes, one JSON row is appended to
the log file. The row contains:

  - A unique ID (so re-importing the same file doesn't double-count)
  - Start and end timestamps in milliseconds
  - The model name (translated to the badge's canonical labels where
    possible -- e.g. "gpt-5-pro" -> "GPT-5 Pro")
  - The site name you passed to enable()
  - Token counts (input and output), if your SDK returned them

Nothing else is logged. Prompts, responses, system messages, and any
other content are NEVER written to disk by this module.
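
To make the schema concrete, here is a hedged sketch of what writing
one such row could look like. `append_row` is a name invented for this
example, not a function the module exports; the field names match the
JSONL schema described above:

```python
import json
import uuid

def append_row(path, model, site_name, started_ms, ended_ms,
               tokens_in=None, tokens_out=None):
    """Append one activity row in the schema described above."""
    row = {
        "external_id": uuid.uuid4().hex,  # unique ID so re-imports dedupe
        "started_at": started_ms,         # epoch milliseconds
        "ended_at": ended_ms,
        "model": model,                   # canonical badge label if known
        "site_name": site_name,           # the string passed to enable()
        "tokens_in": tokens_in,           # None if the SDK didn't report them
        "tokens_out": tokens_out,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(row) + "\n")
```

Note that prompt and response text never enter the row at all -- only
timing, model, and token metadata.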

What this module does NOT do
----------------------------
- Does not send anything over the network. Everything is local files.
- Does not modify your API keys or your SDK configuration.
- Does not change the responses you get from the API.
- Does not break your program if anything goes wrong inside the
  telemetry. Every operation is wrapped in try/except; any failure
  falls through to the original SDK call.

How to confirm it's working
---------------------------
After running your program once, run the snippet file directly from a
terminal:

    python ai_power_badge_telemetry.py

It'll print the log file path and tell you how many lines are in it.
If the count went up after a real run, telemetry is working.

Or just check the log file directly -- it's a normal text file you
can open in Notepad or VS Code. Each line should look like:

    {"external_id":"...","started_at":1714780800000,"ended_at":1714780847000,"model":"Claude Opus 4.1","site_name":"External: Proposal Writer","tokens_in":4200,"tokens_out":1800}
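
If you'd rather sanity-check a row programmatically, each line is plain
JSON, so a few lines of Python will do. This example parses the sample
row above and derives the call duration from its timestamps:

```python
import json

# The sample row from above (external_id elided in the docs).
line = ('{"external_id":"...","started_at":1714780800000,'
        '"ended_at":1714780847000,"model":"Claude Opus 4.1",'
        '"site_name":"External: Proposal Writer",'
        '"tokens_in":4200,"tokens_out":1800}')
row = json.loads(line)
duration_s = (row["ended_at"] - row["started_at"]) / 1000
print(row["model"], f"{duration_s:.0f}s")  # Claude Opus 4.1 47s
```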

What the badge does with it
---------------------------
Open the AI Power Badge popup, scroll to "External program import,"
click "Import activity file," pick the JSONL file. The badge places
each call on the timeline at its real timestamp. Old calls fill in
the past (they show up in your charts and may update past daily
peaks); recent calls show in the rolling-hour number.

Re-importing the same file is safe -- the badge dedupes by external
ID, so only new rows count each time. You can import the same file
once a day or once a week, whatever fits your workflow.
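
The dedupe rule itself is simple. The badge does this inside the
extension, but as a sketch in Python (function and variable names are
illustrative, not part of the module): keep a set of IDs already
imported, and skip any row whose external_id is in it.

```python
import json

def import_new_rows(lines, seen_ids):
    """Yield only rows whose external_id hasn't been imported yet,
    recording each new ID in seen_ids."""
    for line in lines:
        row = json.loads(line)
        if row["external_id"] not in seen_ids:
            seen_ids.add(row["external_id"])
            yield row
```

Running this twice over the same lines yields each row exactly once,
which is why re-importing a file never double-counts.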

The log file is automatically trimmed on each program startup to drop
rows older than 7 days. So it doesn't grow without bound -- and the
badge ignores rows older than 7 days at import time anyway.

Frequently asked
----------------
Q: Does this work if I'm using a different AI SDK?
A: No -- this snippet only wraps the official openai, anthropic, and
   google-genai SDKs. If you're using a different library or your
   own HTTP client, you can still log calls manually:

       import time
       import ai_power_badge_telemetry as t
       start_ms = int(time.time() * 1000)
       response = your_custom_api_call(...)
       end_ms = int(time.time() * 1000)
       t.log_call(
           model="GPT-5 Pro",
           site_name="External: My Program",
           started_ms=start_ms,
           ended_ms=end_ms,
           tokens_in=getattr(response, "input_tokens", None),
           tokens_out=getattr(response, "output_tokens", None),
       )

Q: Can I write to a different log file location?
A: The default path is what the badge popup expects. If you want a
   different location, you'd need to override the snippet's
   _log_file_path() function or write your own log_call() that emits
   the same JSONL schema to wherever you want. The badge's file
   picker accepts files from any location -- the displayed default
   path is just a convenience.

Q: My program runs all day. Does the log grow forever?
A: No. The snippet trims rows older than 7 days every time you call
   enable(). For long-running programs, call
   ai_power_badge_telemetry.trim_log() periodically yourself.
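
The trim rule can be sketched in a few lines. This is not the module's
actual implementation, just an illustration of the policy it describes:
drop every row whose end timestamp is more than 7 days old.

```python
import json
import time

SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000

def trim_lines(lines, now_ms=None):
    """Keep only rows that ended within the last 7 days."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    cutoff = now_ms - SEVEN_DAYS_MS
    return [ln for ln in lines if json.loads(ln)["ended_at"] >= cutoff]
```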

Q: I'm worried about privacy. Is anything sent off my machine?
A: No. The snippet writes only to a local file. The badge extension
   reads only the local file you explicitly pick in its file picker.
   Neither component connects to any remote service.

Q: What happens if my program crashes mid-call?
A: Nothing bad. The log file is opened in append mode and each row is
   written and flushed as a single complete line. A crash before the
   API call returns means that call simply isn't logged. A crash in
   the brief window after the call returns but before the line is
   flushed costs you at most that one row -- it can't corrupt the
   rows already on disk.

Q: Can I disable telemetry temporarily?
A: Just don't call enable(). If you've already called it earlier in
   the program, you can disable further logging by removing the
   wrapped methods, but the easier pattern is to gate enable() behind
   a config flag in your program.
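
One way to implement that gate is an environment variable. The
variable name AI_POWER_BADGE below is an illustrative choice, not
something the module reads itself:

```python
import os

def telemetry_enabled() -> bool:
    """Gate telemetry behind an environment flag (name is illustrative)."""
    return os.environ.get("AI_POWER_BADGE") == "1"

# At program startup:
# if telemetry_enabled():
#     import ai_power_badge_telemetry
#     ai_power_badge_telemetry.enable("External: My Program")
```

Run your program with AI_POWER_BADGE=1 to log, and without it to skip
telemetry entirely.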

License
-------
Public domain. Use it however you like.
