Categories
Biology
Ryan Mehra, Anshoo Mehra
Large-language-model (LLM) “hallucinations” are usually condemned as reliability faults because they generate confident yet false statements [1]. Emerging research, however, finds that such confabulations mirror divergent thinking and can seed novel hypotheses [2, 3]. This study, conducted by independent investigators with no physical laboratory but unlimited API access to OpenAI models (4o, 4o-mini, 4.1, 4.1-mini), tests whether deliberately elicited hallucinations can accelerate medical innovation. We target three translational aims: (i) epistemological creativity for medicine, where speculative errors inspire fresh research questions; (ii) generative biomedical design, exemplified by hallucinated protein and drug candidates later validated in vitro [4]; and (iii) speculative clinical engineering, where imaginative missteps suggest prototypes such as infection-resistant catheters [5]. A controlled prompt-engineering experiment compares a truth-constrained baseline to a hallucination-promoting condition across the four OpenAI models. Crucially, all outputs are scored for novelty and prospective clinical utility by an autonomous LLM-based “judge” system, adapted from recent self-evaluation frameworks [6], rather than by human experts. The LLM judge reports that hallucination-friendly prompts yield 2–3× more ideas rated simultaneously novel and potentially useful, albeit with increased low-quality noise. These findings illustrate a cost-effective workflow in which consumer-accessible LLMs act as both idea generator and evaluator, expanding the biomedical creative search space while automated convergence techniques preserve epistemic rigor, reframing hallucination from flaw to feature in at-home medical R&D.
10.69831/e59eafc04e
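The abstract's "automated convergence" step, in which an LLM judge scores each generated idea and only ideas rated both novel and potentially useful are retained, can be sketched in miniature. This is a purely illustrative mock: the paper's actual prompts, judge rubric, and API calls are not given here, so the `Idea` class, the threshold of 0.7, and the example scores are all hypothetical stand-ins for real judge outputs.

```python
# Hypothetical sketch of the judge-based convergence filter described in the
# abstract. Judge scores are mocked as fixed numbers; in the actual workflow
# they would come from an LLM-based "judge" rating each idea's novelty and
# prospective clinical utility.

from dataclasses import dataclass


@dataclass
class Idea:
    text: str
    novelty: float   # judge score in [0, 1]
    utility: float   # judge score in [0, 1]


def keep_novel_and_useful(ideas, threshold=0.7):
    """Retain only ideas rated simultaneously novel and potentially useful."""
    return [i for i in ideas if i.novelty >= threshold and i.utility >= threshold]


# Mock outputs from the two prompt conditions: a truth-constrained baseline
# tends to produce safe but familiar ideas; a hallucination-promoting prompt
# produces more candidates, including low-quality noise the filter removes.
baseline = [Idea("known antimicrobial catheter coating", novelty=0.2, utility=0.9)]
hallucinated = [
    Idea("speculative anti-biofilm peptide surface", novelty=0.9, utility=0.8),
    Idea("implausible self-powered perpetual pump", novelty=0.95, utility=0.1),
]

print(len(keep_novel_and_useful(baseline)))       # 0
print(len(keep_novel_and_useful(hallucinated)))   # 1
```

The filter illustrates the abstract's trade-off: the hallucination-friendly condition yields more surviving ideas despite its added noise, because the judge-based convergence step discards candidates that fail either criterion.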
Physics
Shivani Shivu Singh, Shivu Singh
The accelerating expansion of the universe remains one of the most profound challenges in modern cosmology. The standard ΛCDM model attributes this to a cosmological constant (Λ), yet persistent discrepancies — such as the Hubble tension and S₈ tension — suggest the need for alternative frameworks. This study proposes the Shivanic Force (SF), a dynamic repulsive effect arising from large-scale tension gradients in spacetime, generated by the asymmetric clustering of matter and expansion of cosmic voids. I introduce a modified Friedmann equation incorporating SF, and test its predictions against observational data from SDSS DR16 eBOSS LRG galaxies and Pantheon supernovae. The model increases the expansion rate at intermediate redshifts, improving consistency with local H₀ measurements and alleviating the S₈ tension by extending cosmic structure growth time. This work presents SF as a physically motivated, late-time phenomenon capable of addressing key cosmological tensions.
10.69831/6c4f3399fd
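The abstract's modified Friedmann equation is not reproduced on this listing page. A schematic form consistent with the description would add a late-time repulsive term to the standard flat-ΛCDM expansion rate; the symbol F_SF(z) below is a hypothetical placeholder for the Shivanic Force contribution, whose actual functional form is the paper's contribution and is not given here:

```latex
H^{2}(z) \;=\; \frac{8\pi G}{3}\,\rho(z) \;+\; \frac{\Lambda c^{2}}{3} \;+\; F_{\mathrm{SF}}(z)
```

As described, F_SF(z) would be negligible at high redshift and grow at late times as voids expand and matter clusters asymmetrically, raising H(z) at intermediate redshifts toward local H₀ measurements.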