**Author:** Sayed Allam
**Last Updated:** June 20, 2025
This document provides detailed attribution and sourcing information for all prompts included in this collection, ensuring proper credit to original creators and transparency about data sources.
---
## 🎯 Attribution Philosophy
We believe in **transparent and ethical prompt sharing** that:
- Credits original creators and researchers
- Respects intellectual property rights
- Promotes open knowledge sharing
- Maintains academic and professional integrity
- Supports the AI research community
---
## 📚 Source Categories
### 1. Official Company Releases
Prompts officially disclosed or released by AI companies:
#### OpenAI Sources
- **Official Documentation**: OpenAI platform documentation and guides
- **Research Papers**: Published academic papers and technical reports
- **Developer Resources**: Official API documentation and examples
- **Public Statements**: Blog posts and official communications
#### Anthropic Sources
- **Constitutional AI Papers**: Research publications on AI safety
- **Claude Documentation**: Official product documentation
- **Safety Research**: Published safety and alignment research
- **Technical Blog Posts**: Official Anthropic blog content
#### Google Sources
- **Gemini Documentation**: Official Google AI documentation
- **Research Publications**: Google AI research papers
- **Developer Guides**: Google Cloud AI platform resources
- **Official Announcements**: Product launch and update announcements
#### Microsoft Sources
- **Copilot Documentation**: Official GitHub Copilot and Microsoft Copilot docs
- **Azure AI Resources**: Microsoft Azure AI platform documentation
- **Research Papers**: Microsoft Research publications
- **Official Blog Posts**: Microsoft AI blog content
### 2. Leaked or Disclosed Prompts
Prompts that became public through various disclosure methods:
#### Disclosure Methods
- **User Reverse Engineering**: Community efforts to understand system prompts
- **Accidental Disclosure**: Unintentional exposure through system responses
- **Security Research**: Ethical security research and responsible disclosure
- **Developer Sharing**: Shared by developers with permission from the platform or rights holder
#### Verification Status
- **Verified**: Confirmed authentic through multiple sources
- **Likely Authentic**: Strong evidence of authenticity
- **Community Reported**: Shared by community, verification pending
- **Historical**: No longer current but historically significant
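The verification tiers above can be modeled as a small record schema. The sketch below is purely illustrative; the `PromptSource` class and its field names are hypothetical, not the repository's actual data format:

```python
from dataclasses import dataclass
from enum import Enum


class Verification(Enum):
    """The four verification tiers used throughout this document."""
    VERIFIED = "Verified"
    LIKELY_AUTHENTIC = "Likely authentic"
    COMMUNITY_REPORTED = "Community reported"
    HISTORICAL = "Historical"


@dataclass
class PromptSource:
    """Minimal attribution record for one prompt (hypothetical schema)."""
    model: str
    source_type: str
    verification: Verification
    date_range: str


# Example record mirroring a row from the tables later in this document.
record = PromptSource(
    model="ChatGPT-4o",
    source_type="User reverse engineering",
    verification=Verification.VERIFIED,
    date_range="2024-2025",
)
```

Keeping the tier as an enum rather than free text prevents near-duplicate labels ("verified", "Confirmed") from creeping into the tables.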
### 3. Open Source Projects
Prompts from open source AI projects and repositories:
#### GitHub Repositories
- **Awesome Prompts**: Community-curated prompt collections
- **LangChain**: Prompt templates from the LangChain project
- **Individual Repositories**: Personal and project-specific prompt collections
- **Research Projects**: Academic and research institution repositories
#### Community Contributions
- **Reddit Communities**: r/ChatGPT, r/MachineLearning, r/PromptEngineering
- **Discord Servers**: AI development and prompt engineering communities
- **Twitter/X Threads**: Shared by researchers and developers
- **Professional Networks**: LinkedIn posts and professional sharing
### 4. Academic and Research Sources
Prompts from academic research and educational institutions:
#### Research Papers
- **ACL Conferences**: Association for Computational Linguistics publications
- **NeurIPS**: Neural Information Processing Systems papers
- **ICML**: International Conference on Machine Learning papers
- **ArXiv Preprints**: Pre-publication research papers
#### Educational Institutions
- **University Research**: Academic institution AI research projects
- **Course Materials**: Educational prompt examples and templates
- **Thesis Work**: Graduate student research on prompt engineering
- **Workshop Materials**: Academic workshop and conference materials
---
## 🔍 Verification Process
### Source Verification Steps
1. **Origin Identification**: Trace back to original source
2. **Authenticity Check**: Verify through multiple independent sources
3. **Date Verification**: Confirm creation and publication dates
4. **Context Validation**: Ensure proper context and intended use
5. **Legal Review**: Check licensing and usage permissions
### Quality Assurance
- **Cross-Reference**: Compare with known authentic sources
- **Expert Review**: Validation by domain experts when possible
- **Community Feedback**: Leverage community knowledge and verification
- **Ongoing Monitoring**: Regular updates and corrections as needed
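The five verification steps can be expressed as a simple checklist routine. This is an illustrative sketch only; the actual review described above is a manual process, and the function and check names here are assumptions:

```python
# Illustrative only: the document's review process is manual, not automated.
REQUIRED_CHECKS = [
    "origin_identified",      # 1. Origin Identification
    "authenticity_confirmed", # 2. Authenticity Check
    "dates_verified",         # 3. Date Verification
    "context_validated",      # 4. Context Validation
    "legal_reviewed",         # 5. Legal Review
]


def review_status(checks: dict) -> str:
    """Return an overall status given per-step pass/fail results."""
    missing = [name for name in REQUIRED_CHECKS if not checks.get(name, False)]
    if not missing:
        return "approved"
    return "pending: " + ", ".join(missing)
```

A prompt is only "approved" when every step passes; anything else stays pending with the outstanding steps listed, which mirrors how partially verified prompts are tiered rather than silently accepted.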
---
## 📊 Detailed Source Breakdown
### Category 1: Leaked System Prompts (Major AI Models) - 82 Prompts
#### OpenAI Model Prompts (25 prompts)
| Model | Source Type | Verification | Date Range |
|-------|-------------|--------------|------------|
| ChatGPT-4o | User reverse engineering | Verified | 2024-2025 |
| ChatGPT-4o Mini | Community disclosure | Likely authentic | 2024-2025 |
| DALL-E 3 | Official documentation | Verified | 2023-2024 |
| Deep Research | Security research | Verified | 2024-2025 |
| O3/O4 Models | Early access leaks | Community reported | 2024-2025 |
#### Anthropic Model Prompts (18 prompts)
| Model | Source Type | Verification | Date Range |
|-------|-------------|--------------|------------|
| Claude 3.5 Sonnet | User reverse engineering | Verified | 2024-2025 |
| Claude 3.7 | Community sharing | Likely authentic | 2024-2025 |
| Constitutional AI | Research papers | Verified | 2022-2024 |
| Claude API Tools | Developer documentation | Verified | 2023-2025 |
#### Google Model Prompts (15 prompts)
| Model | Source Type | Verification | Date Range |
|-------|-------------|--------------|------------|
| Gemini 2.0 | Official announcements | Verified | 2024-2025 |
| Gemini 2.5 | Developer previews | Likely authentic | 2025 |
| Bard Advanced | User disclosure | Community reported | 2023-2024 |
| Gemini Diffusion | Research publications | Verified | 2024-2025 |
#### X.ai Model Prompts (12 prompts)
| Model | Source Type | Verification | Date Range |
|-------|-------------|--------------|------------|
| Grok 2 | Platform documentation | Verified | 2024-2025 |
| Grok 3 | Early access users | Likely authentic | 2025 |
| X.ai Platform | Developer resources | Verified | 2024-2025 |
#### Other Model Prompts (12 prompts)
| Provider | Models | Source Type | Verification |
|----------|---------|-------------|--------------|
| Microsoft | Bing Chat, Copilot | Official docs | Verified |
| DeepSeek | Reasoning models | Research papers | Verified |
| Alibaba | Qwen series | Official releases | Verified |
| Moonshot | Kimi models | Community sharing | Likely authentic |
### Category 2: AI Development Tools & Platforms - 23 Prompts
#### Code Editors and IDEs (12 prompts)
| Tool | Version | Source Type | Verification |
|------|---------|-------------|--------------|
| Cursor IDE | v0.38+ | User extraction | Verified |
| Windsurf | Current | Developer sharing | Likely authentic |
| Replit | Coding Assistant | Official docs | Verified |
| VS Code Extensions | Various | Extension code | Verified |
#### Web Development Platforms (11 prompts)
| Platform | Source Type | Verification | Access Method |
|----------|-------------|--------------|---------------|
| Bolt.new | User analysis | Verified | Public interface |
| v0 (Vercel) | Developer sharing | Likely authentic | Beta access |
| Lovable | Community reports | Community reported | User sharing |
| Same.dev | Documentation | Verified | Public docs |
### Category 3: System Prompts Leaks by Company - 38 Prompts
#### Company-Specific Collections
- **OpenAI Internal** (12 prompts): Internal documentation leaks, security research
- **Anthropic Research** (10 prompts): Research paper extracts, safety documentation
- **Google AI** (8 prompts): Internal tool prompts, research projects
- **Microsoft** (5 prompts): Office integration, enterprise tools
- **Others** (3 prompts): Various companies and platforms
### Category 4: Agent Prompts Collection - 17 Prompts
#### Agent Types and Sources
- **Multi-Tool Agents** (5 prompts): Open source projects, research papers
- **Specialized Agents** (7 prompts): Domain-specific implementations
- **Workflow Agents** (3 prompts): Business process automation
- **Research Agents** (2 prompts): Academic and analysis tools
### Category 5: LLMs & Agents System Prompts - 15 Prompts
#### Modern LLM Sources
- **Chinese Models** (8 prompts): Official sources for Qwen, ChatGLM, and Kimi
- **Research Models** (4 prompts): Academic and experimental models
- **Specialized Models** (3 prompts): Domain-specific implementations
### Category 6: AI Agent Ideation Prompts - 15 Prompts
#### Creative and Ideation Sources
- **Community Contributions** (8 prompts): Reddit, Discord, Twitter
- **Educational Resources** (4 prompts): Courses, tutorials, workshops
- **Personal Projects** (3 prompts): Individual developer creations
### Category 7: AI System Prompts by Kishan Patel - 12 Prompts
#### Curated Collection by Kishan Patel
- **Original Source**: Kishan Patel's curated collection
- **Verification**: Author-verified through original repository
- **Quality**: Hand-selected, high-performing prompts
- **Focus**: Development and practical applications
---
## ⚖️ Legal and Ethical Considerations
### Fair Use and Educational Purpose
This collection is assembled under fair use principles for:
- **Educational Purposes**: Learning prompt engineering techniques
- **Research Activities**: Academic study of AI system design
- **Transformative Use**: Adaptation and improvement of AI systems
- **Public Benefit**: Democratizing access to AI knowledge
### Intellectual Property Respect
We respect intellectual property rights by:
- **Proper Attribution**: Crediting all known original sources
- **Educational Focus**: Using content for learning and research
- **No Commercial Exploitation**: Not profiting from others' work
- **Responsive Removal**: Removing content upon request from rights holders
### Privacy and Safety
We maintain privacy and safety by:
- **No Personal Data**: Excluding any personally identifiable information
- **Safety Guidelines**: Preserving important safety and ethical guidelines
- **Responsible Sharing**: Encouraging ethical use of shared prompts
- **Community Standards**: Following community norms and expectations
---
## 🤝 Community Contribution
### How Sources Are Added
1. **Community Submission**: Users submit new prompts with source information
2. **Verification Process**: Sources are verified through our quality assurance process
3. **Attribution Review**: Proper attribution and licensing are confirmed
4. **Quality Assessment**: Content quality and relevance are evaluated
5. **Integration**: Approved content is integrated with full attribution
### Source Quality Standards
- **Authenticity**: Must be verifiable as authentic or likely authentic
- **Attribution**: Complete source information must be available
- **Quality**: Must meet our quality standards for prompt effectiveness
- **Relevance**: Must be relevant to the collection's educational purpose
- **Legality**: Must comply with applicable laws and ethical standards
---
## 📝 Correction and Update Process
### Reporting Issues
If you identify any issues with sources or attribution:
1. **Contact Information**: Email or GitHub issue with detailed information
2. **Evidence Required**: Provide evidence for any corrections needed
3. **Correction Process**: We will investigate and make necessary updates
4. **Timeline**: Most corrections will be addressed within 7 days
5. **Acknowledgment**: Corrections will be acknowledged in update logs
### Regular Updates
- **Monthly Reviews**: Regular review of source information and updates
- **Community Feedback**: Integration of community feedback and corrections
- **New Sources**: Addition of newly discovered or released prompts
- **Quality Improvements**: Ongoing improvements to verification and attribution
---
## 🔗 External Resources
### Additional Prompt Collections
- [Awesome Prompts](https://github.com/f/awesome-chatgpt-prompts)
- [LangChain Prompt Hub](https://smith.langchain.com/hub)
- [OpenAI Examples](https://platform.openai.com/examples)
- [Anthropic Prompt Library](https://docs.anthropic.com/claude/prompt-library)
### Research and Academic Sources
- [Papers with Code - NLP](https://paperswithcode.com/area/natural-language-processing)
- [ACL Anthology](https://aclanthology.org/)
- [ArXiv AI Section](https://arxiv.org/list/cs.AI/recent)
- [Google AI Publications](https://ai.google/research/pubs/)
---
**Source documentation compiled by Sayed Allam - June 2025**
*Ensuring transparency and proper attribution in AI knowledge sharing*