Research Studies
Empirical investigations and theoretical studies exploring AI safety, superintelligence dynamics, and the capabilities of advanced AI systems.
Active Studies
Capability Scaling Laws in Large Language Models
Investigating the relationship between model scale, training compute, and emergent capabilities in large language models. Analyzing whether capability emergence follows predictable patterns or exhibits discontinuous jumps that could indicate threshold effects relevant to superintelligence theory.
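As a rough illustration of the kind of analysis involved (the numbers and the fitting procedure below are illustrative assumptions, not results from the study), a power law of the form L(C) = a * C^(-b) can be fit to early training runs in log-log space and extrapolated to larger compute budgets; runs that fall well off the extrapolated trend are candidates for the discontinuous jumps the study is concerned with.

# Illustrative sketch only; all compute/loss values are hypothetical.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training FLOPs
loss = np.array([3.20, 2.70, 2.28, 1.93, 1.40])      # observed eval loss

# Fit log(loss) = log(a) - b*log(compute) on the first four runs only,
# then extrapolate the power law to the largest run.
fit_n = 4
slope, intercept = np.polyfit(np.log(compute[:fit_n]), np.log(loss[:fit_n]), deg=1)

predicted = np.exp(intercept + slope * np.log(compute))
log_residuals = np.log(loss) - np.log(predicted)

tolerance = 0.05  # log-loss deviation treated as "off-trend" in this toy example
for c, obs, pred, r in zip(compute, loss, predicted, log_residuals):
    flag = "  <-- deviates from the fitted power law" if abs(r) > tolerance else ""
    print(f"compute={c:.0e}  observed={obs:.2f}  trend={pred:.2f}{flag}")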
Multi-Agent Coordination in AI Systems
Examining how multiple AI agents coordinate and compete in shared environments. Investigating whether collective intelligence properties emerge that differ from individual agent capabilities, and analyzing implications for distributed superintelligence scenarios.
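One of the simplest collective-intelligence effects, pooling independent noisy estimates, gives a feel for how collective performance can differ from individual capability. The sketch below is a toy illustration under that assumption; the agents, noise model, and numbers are hypothetical and not taken from the study.

# Toy comparison of individual vs. pooled ("collective") estimation error.
import random
import statistics

random.seed(0)
hidden_value = 100.0
num_agents = 25

# Each agent observes the hidden value with independent noise.
estimates = [hidden_value + random.gauss(0, 15) for _ in range(num_agents)]

individual_errors = [abs(e - hidden_value) for e in estimates]
collective_error = abs(statistics.mean(estimates) - hidden_value)

print(f"mean individual error: {statistics.mean(individual_errors):.2f}")
print(f"collective (pooled) error: {collective_error:.2f}")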
Historical Analysis of AI Capability Predictions
Comprehensive analysis of AI capability predictions from 1950 to 2024, examining prediction accuracy, methodology quality, and common failure modes. Extracting lessons to improve current methods for forecasting advanced AI timelines and capabilities.
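A minimal example of one accuracy measure such an analysis might use: signed error and mean absolute error between predicted and realized years for a milestone. The milestones and dates below are placeholders, not findings of the study.

# Illustrative scoring sketch; entries are placeholders.
predictions = [
    {"milestone": "Milestone A", "predicted_year": 1970, "realized_year": 1997},
    {"milestone": "Milestone B", "predicted_year": 2000, "realized_year": 2016},
    {"milestone": "Milestone C", "predicted_year": 2010, "realized_year": 2023},
]

absolute_errors = []
for p in predictions:
    error = p["realized_year"] - p["predicted_year"]  # positive = prediction was too early
    absolute_errors.append(abs(error))
    print(f"{p['milestone']}: predicted {p['predicted_year']}, realized {p['realized_year']}, error {error:+d} years")

print(f"mean absolute error: {sum(absolute_errors) / len(absolute_errors):.1f} years")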
Completed Studies
Reasoning Capabilities in Current LLMs
Systematic evaluation of reasoning capabilities across major large language models (GPT-4, Claude, Gemini). Assessed mathematical reasoning, logical deduction, causal inference, and analogical thinking to establish baseline capabilities and identify failure modes.
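Schematically, an evaluation of this kind can be organized as a harness that runs category-tagged reasoning items through a model and reports per-category accuracy. The sketch below is illustrative; query_model is a stand-in for whichever model API is under test, and the items are toy examples rather than the study's benchmark.

# Schematic evaluation harness; not the study's actual code.
from collections import defaultdict

def query_model(prompt: str) -> str:
    # Placeholder; a real harness would call the model under evaluation here.
    return "4"

items = [
    {"category": "mathematical", "prompt": "What is 2 + 2?", "answer": "4"},
    {"category": "logical", "prompt": "All A are B; x is A. Is x B? (yes/no)", "answer": "yes"},
    {"category": "causal", "prompt": "Does correlation alone establish causation? (yes/no)", "answer": "no"},
]

scores = defaultdict(list)
for item in items:
    response = query_model(item["prompt"]).strip().lower()
    scores[item["category"]].append(response == item["answer"])

for category, results in scores.items():
    print(f"{category}: {sum(results)}/{len(results)} correct")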
Intelligence Explosion Theory: 60-Year Synthesis
Comprehensive synthesis of intelligence explosion theory from I.J. Good (1965) through contemporary models. Analyzed the evolution of core concepts, identified theoretical gaps, and examined the relationship between classical theory and modern AI developments.
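A common toy formalization of the intelligence explosion idea (illustrative here, not necessarily the formalization used in the synthesis) models capability growth as dI/dt = k * I^alpha: with alpha at or below 1, growth is exponential or slower, while alpha above 1 yields super-exponential growth that diverges in finite time. The short simulation below compares these regimes numerically; k, alpha, and the step sizes are arbitrary choices for illustration.

# Toy numerical comparison of growth regimes in dI/dt = k * I^alpha.
def simulate(alpha, k=0.1, initial=1.0, steps=150, dt=0.1):
    intelligence = initial
    for _ in range(steps):
        intelligence += k * intelligence ** alpha * dt  # simple Euler step
    return intelligence

# alpha < 1: sub-exponential; alpha = 1: exponential; alpha > 1: super-exponential
# (the continuous alpha > 1 solution actually diverges in finite time).
for alpha in (0.5, 1.0, 1.5):
    print(f"alpha={alpha}: capability after 150 steps = {simulate(alpha):.2f}")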
Research Collaboration Opportunities
Interested in collaborating on any of these studies or proposing new research directions? Francis Clase welcomes partnerships with researchers, institutions, and organizations.
Discuss Research Collaboration