
How I Use AI-Assisted Research in the Absolute Relativity Project

(And Why the Usual "AI-Generated?" Question Misses the Point)




The Question That Keeps Coming Up

When people encounter work that has been developed with heavy AI involvement, a question inevitably surfaces: "How much of this is AI-generated?"


It is a reasonable question. But the framing reveals something interesting about where we are culturally—caught between an old model of authorship and a new reality that does not quite fit it yet.

What people usually mean when they ask this: Did you type the words, or did a machine? Did AI invent the ideas? They are imagining a binary—0% AI means "real author," 100% AI means "fake" or "content mill." Somewhere in between feels suspicious.


This framing made sense when AI was little more than text autocomplete. It is already obsolete.

Asking "how much is AI-generated" is now a bit like asking "how much of your work is word-processor-generated?" Or "how much is calculator-generated?" The question is trying to measure authenticity using an old yardstick. The yardstick worked fine when the tool was simpler. It does not capture what is actually happening anymore.


But underneath the question, something legitimate is being asked. The real concerns are: Who am I actually interacting with? Is someone personally accountable for the claims? Is this a human-led project or an AI-driven content output? Can I trust that the underlying ideas are coherent and not just stitched together by a model?


Those are fair questions. They deserve direct answers.


What Absolute Relativity Actually Is

Before getting into the AI question, it helps to know what the project is—just enough to see why any of this matters.


Absolute Relativity is a long-term attempt to build a new map of reality. It treats present experience, rather than matter, as the starting point. The "public world"—the stable environment we all seem to share—is understood as a stabilized, shared outward record that emerges under constraints. The framework tries to connect things that are usually kept separate: consciousness and qualia, physics and measurement structure, the logic of objectivity and shared reality.


What it is not: a personality cult, a vague spiritual manifesto, or an attempt to win arguments online. It is also not "AI discovered a new theory." The theory was developed over roughly fifteen years of quiet, private work—long before the current generation of AI tools existed.


Going public is a new phase. The goal is to establish a stable record, invite scrutiny, and build a test program. The work should be evaluated on its own terms—its definitions, its internal coherence, its evidence pathways, its tests and falsifiers.



A Unique Vantage Point on AI

Spending years dedicated to high-level theory development creates an unusual position when AI enters the workflow. The focus was entirely on building a new picture of reality—avoiding the technical weeds, staying at the level of conceptual architecture and philosophical direction.


When AI became part of the process, something became immediately clear.


The core work is still beyond what AI can do: the conceptual architecture, the philosophical direction, the "why" behind the unfolding. Even now, with more advanced models. It can handle pieces. It cannot see the whole or guide its development.


But there is another side. AI is remarkably good at technical depth, formal exploration, working in the weeds. It can hold vast amounts of detail in active memory. It can check consistency across hundreds of pages. It can translate rough intuitions into precise formal language.


These are genuinely different kinds of intelligence. And the most effective approach is not to compete with AI on its strengths—it is to let each side do what it is good at.



AI-Assisted Research: Division of Labor, Not Shortcut

One of the greatest strengths in working with AI is not getting in the way of what it is good at.


The principle behind AI-assisted research is simple: do not force yourself into roles where machines are already better. Focus on what AI cannot do—right now—and let AI do what it can. This is not laziness. It is directing limited human capacity toward the places where human insight is essential.


The roles separate clearly.


The human role: conceptual architecture—what the theory claims and why. The philosophical big picture and why it unfolds this way. Interpretation—which directions matter, which insights are real. Judgment and responsibility—what is included, excluded, asserted. Integrity—what counts as evidence, what is uncertain, what is speculative.


The AI role: expression and compression—making it readable. Structure—organizing large bodies of material. Technical exploration—mathematics, formal language, simulation scaffolding. Consistency checking—finding contradictions, unclear definitions. Helping translate concepts into formal statements and testable forms.


AI assists the process. I own the claims.



The Scaling Dynamic: As AI Goes Deeper, We Go Higher

Here is something worth noticing about where this is heading.


AI is scaling exponentially in certain kinds of intelligence—technical depth, formal manipulation, pattern recognition across massive datasets, working "in the weeds." It is getting better at these things faster than most people expected.


What does this mean for human interaction with complex work?


As AI scales up in technical capability, humans need to scale out—interacting with complex work at higher and higher levels.


Scaling out looks like: seeing the big picture. Using intuition to guide direction. Asking the right questions rather than computing the answers. Trusting that AI understands certain things very well—the same way someone trusts a scientific calculator when they type in something complex and get an answer.


The trust model works the same way humans have always related to powerful tools. You trust the calculator's arithmetic, not its judgment about which problem to solve. You trust AI for calculation and technical exploration and formal consistency—not for deciding what matters or judging meaning or choosing direction.


The trajectory is clear. AI will create increasingly complex technical structures that exceed human ability to verify step-by-step. But that does not mean humans cannot understand in the way humans understand—at the level of architecture, direction, and meaning. The interaction just moves to a higher altitude.


This is not a loss. It is a partnership that gets more powerful as it develops.


The Word Processor Analogy

People might once have judged writers who used word processors against those who stuck with typewriters. Easier editing, rearranging, refining. That did not make the ideas any less original.


AI in the writing process works similarly. It is part of the drafting and editing loop. You iterate with it to sharpen clarity and reduce misunderstanding. It is not "press button, publish output."


The actual workflow looks like this: deep engagement with the material. Rough notes and fragments. Interaction and iteration with AI. Reject, redirect, refine—"No, that is not it." Repeat until it matches the actual meaning.


The key point: collaborative refinement of something already understood from inside. The understanding comes first. The tool helps express it.


The Calculator Analogy (And Beyond)

AI is not only language polish. It is also a technical instrument.


Think about the progression: arithmetic by hand, then calculators, then scientific calculators, then computational software, now AI. No one is "less of a mathematician" for not doing every step by hand. The point is freeing capacity for the parts where human insight is essential.


The spirit of the thing: using tools is normal. Refusing tools is not virtue. The question is whether you understand what you are doing and take responsibility for the results.


The Part AI Still Does Not Understand

Even when AI handles the technical and formal side extremely well, it still misses the deeper philosophical heart of a project like this. It does not fully track the "why" that drives the unfolding of the theory.


This shows up in practice. AI can give technically competent responses that miss the point entirely. It can follow the rules of the system without grasping what the system is for. It can produce elegant formalizations that drift from the core insight.


Why this matters: it shows that heavy AI use is not "AI leading the theory." AI accelerates formalization and exploration. It cannot substitute for the conceptual insight that anchors the work.


AI may eventually develop this capacity. But today, it still does not—and effective workflows reflect that reality.



Why People Are Confused Right Now

There is a pattern worth noticing in how many people use AI.


Many people use AI to generate the ideas, then limit their own role to curating and publishing. The AI does the thinking; the human does the selecting.


This creates a problem. Lots of plausible language with weak conceptual grounding. Difficulty distinguishing genuine research from content generation. The "AI slop" phenomenon—material that sounds right but is not anchored to anything.


The approach described here is the opposite. Human-led insight comes first; tool-assisted expression and formalization come second. Not replacing thinking, but freeing capacity for deeper thinking.


The same tool can be used in opposite ways. The question is not "did you use AI?" It is "how did you use it, and who is accountable?"



The Credibility Bridge

How does heavy AI use avoid collapsing into "just trust me"?


The answer: the work should stand on its own. Not on personal authority. Not on charisma. Not on credentials.


What evaluation should be about: definitions. Internal coherence. Evidence pathways. Tests and falsifiers.


This is why the Absolute Relativity project emphasizes a particular publication posture: frozen snapshots with versioned publication. Artifacts and receipts. Clear separation between claims, evidence, tests, and narrative interpretation.


"Receipts-first" is becoming necessary in the age of AI. The stronger AI gets, the more we need auditability, traceability, and stable references.


The goal is not asking people to trust a vibe. It is offering a structured public record that can be evaluated on its own terms.


A Default Answer

When someone asks "how much of this is AI-generated," here is the answer.


First: the theory is human-led. AI did not invent the core ideas. The conceptual architecture was developed over fifteen years of private work, mostly before current AI tools existed.


Second: AI is heavily used as a tool for writing, structuring, and technical exploration. It is part of the workflow like a word processor and a calculator.


Third: I am personally accountable for the final claims, and the project publishes in a way that makes the record auditable.


A clarifying question to ask in return: "When you say AI-generated, do you mean the words, or the ideas and claims?"


The one-sentence version: AI assists expression and technical exploration, but the underlying conceptual architecture and the claims are human-led and accountable.



Why This Matters Beyond Any Single Project

AI is becoming part of the intellectual toolchain the way calculators became part of mathematics.


The meaningful question is not whether AI was used. It is: who is accountable for the claims? Is there a disciplined record that allows real evaluation?


Tools are evolving faster than cultural norms for evaluating work. New standards for accountability and auditability are needed. The old binary—human versus machine—is giving way to something more nuanced: human-led, tool-assisted, publicly auditable.


As AI becomes capable of producing increasingly complex technical structures, the human role does not disappear. It elevates. We move from calculating to directing, from technical execution to architectural vision. The partnership gets more powerful, not less meaningful.


Bottom Line

AI is becoming part of the intellectual toolchain the way calculators became part of mathematics.


The meaningful question is not whether AI was used; it is who is accountable for the claims, and whether there is a disciplined record that allows real evaluation.


As AI scales up in technical capability, we scale out—seeing bigger pictures, asking better questions, trusting the tool for what it is good at while staying responsible for what matters.


That is the approach this project is built around.


