StartupTribunal embraces vibe coding. This policy outlines how AI tools may be used in sprint submissions, what must be disclosed, and our expectations for originality.
1. The Vibe Coding Commitment
We believe AI is a powerful collaborator in the creative process. Our sprints are designed for builders who use AI tools as part of their workflow — not as a replacement for thinking, designing, and iterating. The vibe coding commitment means:
- You are the architect. AI is your assistant. The creative direction, system design, and problem-solving should come from you.
- AI-generated code is welcome, but you must understand and be able to explain every part of your submission.
- Copy-pasting AI output without review, iteration, or adaptation is discouraged and may be flagged by our AI detection scoring.
- The goal is to build something genuinely useful and original — not to generate the most code in the least time.
2. Permitted AI Tool Usage
The following AI tools and practices are permitted during sprints:
- Code assistants — Tools like GitHub Copilot, Cursor, Amazon Q Developer, and similar IDE-integrated assistants are fully permitted.
- Chat-based AI — Using ChatGPT, Claude, Gemini, or other conversational AI for brainstorming, debugging, code review, and learning is permitted.
- AI code generation — Generating boilerplate, utility functions, tests, and scaffolding with AI tools is permitted.
- AI for documentation — Using AI to write or improve README files, comments, and documentation is permitted.
- AI for design — Using AI tools for UI/UX mockups, image generation, or design iteration is permitted.
3. Disclosure Requirements
Transparency is a core value. All participants must disclose their AI usage:
- Your submission README must include a section describing which AI tools were used and how they contributed to the project.
- If a significant portion of your codebase was AI-generated, this should be clearly stated.
- If VibeJudge detects AI usage that you did not disclose, your AI Detection score may suffer.
- Honest disclosure is rewarded — our scoring system values transparency and does not penalize responsible AI usage.
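A disclosure section does not need to be elaborate. One possible shape is sketched below; the heading, tool list, and percentages are purely illustrative, not a required template:

```markdown
## AI Usage Disclosure

- **GitHub Copilot**: inline completions throughout; all suggestions reviewed and adapted.
- **Claude**: brainstormed the caching design and reviewed the API error handling.
- **ChatGPT**: drafted the first pass of this README, then edited by hand.

Roughly a third of the codebase started as AI-generated scaffolding (tests,
CLI boilerplate); the core logic was designed and written by hand.
```

What matters is that a judge can tell at a glance which tools you used and what role they played.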
4. Originality Expectations
While AI tools are welcome, we expect genuine originality in every submission:
- Original concept — Your project should solve a real problem or explore a novel idea. Submissions that are trivial wrappers around AI APIs without meaningful logic will score poorly on innovation.
- Meaningful iteration — Your git history should show a development process with real commits, not a single bulk commit of AI-generated code.
- Understanding — You should be able to explain your architecture decisions, trade-offs, and implementation details if asked.
- No prompt injection — Attempting to manipulate the AI judge through README content, hidden files, or code comments is strictly prohibited and will result in disqualification.
5. What Is Not Permitted
The following practices are prohibited and may lead to disqualification:
- Submitting a project that is entirely AI-generated with no human direction, review, or iteration.
- Using AI to generate fake commit histories or artificially inflate repository activity.
- Embedding prompt injection attacks in your README, code comments, or file names to manipulate VibeJudge scoring.
- Submitting an existing open-source project or template as your own work, with or without AI modifications.
- Using AI to plagiarize or closely replicate another participant's submission.
6. How AI Detection Scoring Works
VibeJudge includes an AI Detection dimension worth 10% of your final score. This dimension evaluates:
- Commit authenticity — Are commits organic and incremental, or bulk-generated?
- Development velocity — Does the pace of development match realistic human-AI collaboration?
- Authorship consistency — Is the coding style consistent throughout the project?
- Iteration depth — Does the git history show real problem-solving and refinement?
- AI generation indicators — Are there signs of unreviewed AI output (generic variable names, boilerplate patterns, lack of project-specific context)?
The AI Detection score is adjusted based on the campaign's AI policy mode. Responsible, disclosed AI usage with genuine human direction will score well.
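To make the weighting concrete, here is a minimal sketch of how the five signals above could combine into the 10% dimension. This is an assumption, not the actual VibeJudge model: the real scoring logic is not published, and the function names, the 0–100 scale, and the simple average are all hypothetical.

```python
# Hypothetical sketch only: VibeJudge's real scoring model is not public.
# Assumes each sub-signal is scored 0-100 and the dimension is a simple
# average, weighted at 10% of the final score.

def ai_detection_dimension(signals: dict[str, float]) -> float:
    """Average the five hypothetical sub-signals into a 0-100 dimension score."""
    expected = {
        "commit_authenticity",
        "development_velocity",
        "authorship_consistency",
        "iteration_depth",
        "ai_generation_indicators",
    }
    missing = expected - signals.keys()
    if missing:
        raise ValueError(f"missing sub-signals: {missing}")
    return sum(signals[k] for k in expected) / len(expected)

def weighted_contribution(dimension_score: float, weight: float = 0.10) -> float:
    """Points this dimension contributes toward a 100-point final score."""
    return dimension_score * weight

scores = {
    "commit_authenticity": 85,
    "development_velocity": 90,
    "authorship_consistency": 80,
    "iteration_depth": 75,
    "ai_generation_indicators": 70,
}
dim = ai_detection_dimension(scores)  # average of the five signals: 80.0
print(weighted_contribution(dim))     # 8.0 points of the final score
```

The takeaway is the scale, not the formula: even under a generous reading, this dimension moves at most 10 points, so responsible, disclosed AI usage costs little, while obvious bulk generation leaves points on the table.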
7. Detailed FAQ
For specific examples — what trips AI Detection, what a safe commit history looks like, what gets you disqualified — see the AI Usage section of the Sprint FAQ.
8. Related Policies
For the full competition rules including eligibility, submission requirements, judging criteria, and prize distribution, please review the Sprint Competition Rules.