An AI-generated score (0–100) that estimates the relative effort and complexity involved in completing each pull request. This is calculated using contextual metadata (code changes, issue data, etc.) and LLM-based analysis. Learn more about how we calculate the Productivity Score here.
The percentage of lines that engineers modify or delete within their own recently written code, calculated as (Total Reworked Lines / Total Changed Lines) × 100%. A reworked line is one that was originally written by the same author within the last 30 days. This metric can be analyzed at the pull request, engineer, team, or organization level to identify patterns of inefficiency or areas where requirements may be unclear.
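As a rough illustration of the formula, here is a minimal Python sketch. It assumes each changed line is annotated with who last wrote it and when; the `ChangedLine` shape and the 30-day constant are hypothetical, not part of any real API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

REWORK_WINDOW = timedelta(days=30)  # "recently written" cutoff from the definition

@dataclass
class ChangedLine:
    author: str                              # engineer making the current change
    previous_author: Optional[str]           # who last wrote the line, if known
    previous_written_at: Optional[datetime]  # when that earlier version was written

def rework_rate(changes: list[ChangedLine], now: datetime) -> float:
    """(Total Reworked Lines / Total Changed Lines) x 100."""
    if not changes:
        return 0.0
    reworked = sum(
        1
        for c in changes
        if c.previous_author == c.author
        and c.previous_written_at is not None
        and now - c.previous_written_at <= REWORK_WINDOW
    )
    return reworked / len(changes) * 100
```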
The duration between when a pull request is created and when the first comment is posted on it. This metric measures how quickly team members engage with new pull requests, indicating responsiveness and collaboration patterns. A shorter time to first comment typically suggests active team engagement and faster feedback cycles.
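A minimal sketch of the calculation, assuming you have the pull request's creation time and the timestamps of its comments (the function and parameter names are illustrative):

```python
from datetime import datetime, timedelta
from typing import Optional

def time_to_first_comment(
    created_at: datetime, comment_times: list[datetime]
) -> Optional[timedelta]:
    """Gap between PR creation and its earliest comment.

    Returns None if no comments exist yet; timestamps earlier than
    creation (e.g., clock skew) are ignored.
    """
    candidates = [t for t in comment_times if t >= created_at]
    return min(candidates) - created_at if candidates else None
```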
The percentage of engineers who used AI tools (e.g., Copilot, Cursor) during the selected time period. This helps measure how widely AI is being integrated into daily workflows.
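Computationally this is a simple ratio; a hypothetical sketch, assuming you can enumerate the engineers active in the period and those with at least one AI-assisted session:

```python
def ai_adoption_rate(ai_users: set[str], all_engineers: set[str]) -> float:
    """Percentage of engineers who used an AI tool in the period."""
    if not all_engineers:
        return 0.0
    return len(ai_users & all_engineers) / len(all_engineers) * 100
```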
The average increase in Productivity Score for an engineer on days when they use AI tools compared to days when they don’t. This offers one measure of AI’s impact on engineering output.
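One plausible way to compute this (the record shape below is an assumption): average each engineer's Productivity Score separately over AI days and non-AI days, take the per-engineer difference, then average those differences across engineers who have both kinds of days.

```python
from collections import defaultdict
from statistics import mean

def average_ai_lift(daily_records: list[tuple[str, bool, float]]) -> float:
    """Mean per-engineer Productivity Score lift on AI-assisted days.

    Each record is (engineer_id, used_ai_that_day, productivity_score);
    engineers lacking at least one AI day and one non-AI day are skipped.
    """
    by_engineer: dict[str, dict[bool, list[float]]] = defaultdict(
        lambda: {True: [], False: []}
    )
    for engineer, used_ai, score in daily_records:
        by_engineer[engineer][used_ai].append(score)

    lifts = [
        mean(days[True]) - mean(days[False])
        for days in by_engineer.values()
        if days[True] and days[False]
    ]
    return mean(lifts) if lifts else 0.0
```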
The AI model most frequently used across the organization (e.g., GPT-4, Claude). Useful for understanding model preferences and potential standardization.
The programming language most frequently associated with AI-assisted development during the selected period. Helps identify where AI is most leveraged across your codebase.
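Both of the "most frequently used" metrics above reduce to taking the mode of a usage log; a minimal sketch (the input shape is an assumption):

```python
from collections import Counter
from typing import Optional

def most_frequent(usage_log: list[str]) -> Optional[str]:
    """Mode of a usage log, e.g., model names or language tags per event."""
    return Counter(usage_log).most_common(1)[0][0] if usage_log else None
```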