The AI research community is experiencing unprecedented growth, but beneath the surface lies a troubling reality: our conference model is breaking. Recent data reveals a system in crisis, one that threatens the very foundations of scientific exchange, equity, and researcher well-being.
The Numbers Don't Lie
Over the past decade, AI conference submissions have grown near-exponentially (R² = 0.979, p < 0.001). Projections suggest more than 65,000 submissions annually by 2030, over triple the 21,400 recorded in 2024. Meanwhile, per-author publication rates have doubled to exceed 4.5 papers per year, putting faculty on track to publish one paper every two months by 2030.
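For intuition, the trend and the projection can be reproduced with a simple log-linear fit. The sketch below is illustrative only: apart from the 21,400 figure for 2024, the yearly counts are invented placeholders, not the paper's data.

```python
import numpy as np

# Illustrative yearly submission counts: only the 2024 value (21,400)
# comes from the text; the earlier years are invented placeholders.
years = np.array([2018, 2020, 2022, 2024])
subs = np.array([7_000, 10_100, 14_700, 21_400])

# Exponential growth model: fit log(submissions) as a line in year.
slope, intercept = np.polyfit(years, np.log(subs), 1)

# R^2 on the log scale (placeholder data, so it won't match the 0.979
# reported for the real series).
residuals = np.log(subs) - (slope * years + intercept)
r2 = 1 - residuals.var() / np.log(subs).var()

# Extrapolate the fitted curve to 2030.
projection = np.exp(slope * 2030 + intercept)
print(f"R² ≈ {r2:.3f}, annual growth ≈ {np.exp(slope) - 1:.0%}, "
      f"2030 projection ≈ {projection:,.0f}")
```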
This isn't sustainable productivity—it's a treadmill of hyper-competition that prioritizes quantity over depth.
Four Dimensions of Crisis
1. Scientific Mission: The Research Lifecycle Lag
AI research moves at breakneck speed. Studies show AI agent capabilities double approximately every seven months, yet our conference cycles, from submission to presentation, last just as long. The result: the state of the art completes a full doubling before a paper is even presented, so research is often outdated by the time it's published and significant community effort goes to waste.
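The arithmetic is stark. A back-of-envelope sketch, using only the two seven-month figures above:

```python
DOUBLING_MONTHS = 7   # reported capability doubling time
PIPELINE_MONTHS = 7   # submission-to-presentation lag

# How far the state of the art moves while a paper waits in the pipeline:
advance = 2 ** (PIPELINE_MONTHS / DOUBLING_MONTHS)
print(f"capabilities grow {advance:.0f}x before the work is presented")
# -> capabilities grow 2x before the work is presented
```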
The growing gap between rapid iteration and annual conferences creates a fundamental misalignment with the goal of timely knowledge dissemination.
2. Environmental Toll: Carbon Emissions at City Scale
NeurIPS 2024's travel emissions alone exceeded 8,254 tCO₂e—more than the daily carbon output of Vancouver's 680,000 residents. As conferences grow, this environmental burden becomes increasingly untenable, creating both ecological damage and economic barriers that undermine diversity, equity, and inclusion (DEI) initiatives.
Flying 15,000 kilometers to present a 5-minute poster is not just expensive—it's environmentally indefensible.
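For scale, consider a rough per-attendee estimate. The emission factor below (~0.15 kg CO₂e per passenger-kilometer) is an assumed long-haul approximation, not a number from the paper:

```python
# Rough per-attendee estimate for a 15,000 km trip. The emission factor
# is an assumed long-haul approximation, not from the source paper.
distance_km = 15_000
ef_kg_per_pkm = 0.15

tonnes = distance_km * ef_kg_per_pkm / 1_000
print(f"≈ {tonnes:.2f} tCO2e per attendee")  # ≈ 2.25 tCO2e
```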
3. Psychological Crisis: A Community Under Strain
Analysis of 405 Reddit threads from r/MachineLearning reveals a stark truth: over 71% of conference-related discussions express negative sentiment, with 34.6% mentioning mental health keywords like "anxiety," "burnout," and "stress."
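The keyword side of such an analysis is easy to picture. The sketch below shows the general shape of the method only; it is not the paper's actual pipeline, and the example threads are made up:

```python
import re

# Toy keyword tagger illustrating the shape of such an analysis; this is
# NOT the paper's actual pipeline, and the threads are fabricated.
MH_KEYWORDS = {"anxiety", "burnout", "stress"}

def mentions_mental_health(text: str) -> bool:
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return bool(tokens & MH_KEYWORDS)

threads = [
    "Rebuttal-period burnout is real",
    "Congrats to everyone with a spotlight!",
    "Deadline stress and reviewer anxiety, again",
]
share = sum(map(mentions_mental_health, threads)) / len(threads)
print(f"{share:.1%} of threads mention mental-health keywords")  # 66.7%
```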
The high-stakes, all-or-nothing evaluation process creates a toxic environment where:
- Doctoral students face immense pressure and job insecurity
- Researchers experience identity anxiety and diminished academic belonging
- The sheer scale of mega-conferences isolates individuals within crowds of thousands
This psychological strain isn't a side effect—it's a symptom of systemic dysfunction.
4. Physical Limits: Venue Capacity Bottleneck
NeurIPS 2024 implemented a lottery system for non-author registrations as attendance neared the Vancouver Convention Centre's 18,000-person capacity. This artificial scarcity limits participation precisely for those who benefit most: students, early-career researchers, and unaffiliated attendees outside established networks.
Even the world's largest venues cannot accommodate exponential growth indefinitely.
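Mechanically, such a lottery is trivial, which underscores that the scarcity is a policy response to physical limits rather than a technical necessity. A minimal sketch, not NeurIPS's actual implementation:

```python
import random

# Capacity-constrained registration lottery, sketched for illustration;
# this is not NeurIPS's actual mechanism.
def run_lottery(applicants: list[str], capacity: int, seed: int = 2024) -> list[str]:
    if len(applicants) <= capacity:
        return list(applicants)        # everyone gets in
    rng = random.Random(seed)          # fixed seed keeps the draw auditable
    return rng.sample(applicants, capacity)

applicants = [f"attendee_{i}" for i in range(25_000)]
admitted = run_lottery(applicants, capacity=18_000)
print(f"{len(admitted):,} admitted, {len(applicants) - len(admitted):,} turned away")
```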
Why Incremental Fixes Fail
Well-intentioned measures like submission caps or multi-site conferences address symptoms, not root causes:
- Submission limits don't reduce "publish-or-perish" pressure; they merely shift it, penalizing junior researchers who need publication records most
- Multi-site conferences (e.g., NeurIPS 2025 in Mexico City and Copenhagen) reduce some travel but maintain centralized review burdens, synchronized deadlines, and high-stakes evaluation
- Hybrid formats treat remote participation as secondary, failing to fundamentally reimagine conference structure
The centralized model—with its monolithic format, single-deadline submissions, and mega-venue gatherings—is structurally incompatible with sustainable growth.
The Path Forward
The data is unambiguous: our current conference model violates its own core mission. It delays knowledge exchange, creates environmental harm, erodes community well-being, and enforces exclusivity through physical constraints.
A lasting solution requires dismantling centralization itself and rebuilding around three principles:
- Decentralized organization – Regional hubs replace mega-venues
- Temporal flexibility – Rolling review replaces annual deadlines
- Community agency – Bottom-up organization replaces top-down control
The Community-Federated Conference (CFC) model offers exactly this paradigm shift. By separating peer review, presentation, and networking into globally coordinated but locally executed components, we can preserve scientific rigor while restoring sustainability, accessibility, and genuine community building.
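To make that separation concrete, here is one hypothetical way its pieces could be described in configuration. Every name and value below is an illustration, not a schema from the CFC proposal:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a federated conference's structure; names and
# values are illustrative, not taken from the CFC proposal.
@dataclass
class ReviewWindow:
    opens: str
    closes: str        # rolling windows replace the single annual deadline

@dataclass
class RegionalHub:
    city: str
    capacity: int      # many local venues replace one mega-venue

@dataclass
class FederatedConference:
    review: list[ReviewWindow] = field(default_factory=list)  # global, asynchronous
    hubs: list[RegionalHub] = field(default_factory=list)     # local, in person

cfc = FederatedConference(
    review=[ReviewWindow("2026-01-01", "2026-03-31"),
            ReviewWindow("2026-07-01", "2026-09-30")],
    hubs=[RegionalHub("Nairobi", 1_500),
          RegionalHub("Seoul", 2_500),
          RegionalHub("São Paulo", 2_000)],
)
print(f"{len(cfc.review)} review windows, {len(cfc.hubs)} regional hubs")
```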
The question isn't whether change is needed—the data makes that abundantly clear. The question is whether we have the courage to reimagine how we gather, share knowledge, and build the future of AI research.
This analysis is based on research published in arXiv:2508.04586. Data sources include CSRankings.org (2015-2024), official conference statistics, Reddit sentiment analysis, and environmental impact modeling.
