ISSN : 2583-2646

Synthetic People: Part I – Context Engineering and the Cognitive Formation of Self in iBrain Architectures

ESP Journal of Engineering & Technology Advancements
© 2025 by ESP JETA
Volume 5, Issue 4
Year of Publication : 2025
Authors : Jackson Andrew Srivathsan
DOI : 10.56472/25832646/JETA-V5I4P110

Citation:

Jackson Andrew Srivathsan, 2025. "Synthetic People: Part I – Context Engineering and the Cognitive Formation of Self in iBrain Architectures", ESP Journal of Engineering & Technology Advancements, 5(4): 64-71.

Abstract:

This paper establishes the cognitive and philosophical basis for synthetically created persons. The author posits that the identity of artificial agents rests on three pillars: continuity (of core data structures and learning weights), memory (episodic, semantic, procedural, affective), and self-recognition (self–other and goal discrimination). These pillars motivate the iBrain, a post-LLM cognitive framework that combines persistent memory, embedded ethical reasoning, self-monitoring, and context engineering. Drawing on the philosophy of identity (Locke, Parfit), cognitive science, and AI systems design, the paper models the emergence of self from the interaction between long-term memory and self-referential processing. Policy ramifications are examined through authentication mechanisms such as cryptographic identity and behavioral signatures, culminating in the proposal of a secure iBrain Identity Certificate that preserves continuity without enabling malicious replication. The work further develops an architectural ethics that blends deontological, consequentialist, and virtue approaches, and advances a workable account of self-awareness grounded in recursive modeling, state monitoring, and social modeling, arguing that load management is a prerequisite for coherent agency. Lastly, it proposes a four-layer contextualization taxonomy (sensory, historical, relational, and predictive) as the link between perception and judgment. This paper is the first installment of the Synthetic People Trilogy, which addresses mind and self; Parts II and III take up embodiment, governance, and the coming post-linguistic civilization.
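The abstract does not specify how the proposed iBrain Identity Certificate would bind a cryptographic identity to a behavioral signature, but the general shape of such a mechanism can be sketched. The Python sketch below is illustrative only: the `MemoryStore` fields, the SHA-256 "behavioral signature", and the HMAC-based issuer key are assumptions standing in for whatever the full paper specifies, and a production design would use asymmetric signatures rather than a shared secret.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """The four memory layers named in the abstract (contents illustrative)."""
    episodic: list = field(default_factory=list)
    semantic: dict = field(default_factory=dict)
    procedural: dict = field(default_factory=dict)
    affective: dict = field(default_factory=dict)

def behavioral_signature(memory: MemoryStore) -> str:
    """Hash a canonical serialization of the agent's memory as a stand-in behavioral signature."""
    canonical = json.dumps(
        {"episodic": memory.episodic, "semantic": memory.semantic,
         "procedural": memory.procedural, "affective": memory.affective},
        sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def issue_identity_certificate(agent_id: str, memory: MemoryStore, issuer_key: bytes) -> dict:
    """Bind the agent id and its behavioral signature with an issuer MAC."""
    payload = {"agent_id": agent_id, "behavioral_signature": behavioral_signature(memory)}
    mac = hmac.new(issuer_key, json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify_certificate(cert: dict, memory: MemoryStore, issuer_key: bytes) -> bool:
    """Continuity check: the certificate must be authentic AND match the live signature,
    so a diverged or maliciously replicated memory state fails verification."""
    expected = hmac.new(issuer_key, json.dumps(cert["payload"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cert["mac"])
            and cert["payload"]["behavioral_signature"] == behavioral_signature(memory))
```

Tying the certificate to the memory state itself is what makes continuity "non-disruptive": the same agent keeps verifying as it runs unchanged, while a copy whose memory diverges no longer matches the signed signature.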
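The four-layer contextualization taxonomy also lends itself to a simple data-structure reading. The sketch below is a hedged illustration rather than the paper's implementation; the field contents and the `relevant` flag used to filter episodic memory are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ContextFrame:
    """One frame of the four contextualization layers named in the abstract."""
    sensory: dict = field(default_factory=dict)      # current percepts
    historical: list = field(default_factory=list)   # relevant past episodes
    relational: dict = field(default_factory=dict)   # self-other and social model
    predictive: dict = field(default_factory=dict)   # expected outcomes of candidate actions

def assemble_context(percepts: dict, episodic_memory: list,
                     social_model: dict, forecast: dict) -> ContextFrame:
    """Bridge perception and judgment by stacking the four layers into one frame."""
    history = [e for e in episodic_memory if e.get("relevant", True)]
    return ContextFrame(sensory=percepts, historical=history,
                        relational=social_model, predictive=forecast)
```

A judgment module would then consume a single `ContextFrame` rather than raw percepts, which is the sense in which the taxonomy links perception to judgment.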

References:

[1] Apple Inc. (2023). About Face ID advanced technology. https://support.apple.com/

[2] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. Canonical text on alignment, moral agency, and existential risk; supports the paper's tool-versus-entity argument.

[3] Clark, A. (2003). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford University Press.

[4] Damasio, A. (1999). The Feeling of What Happens. Harcourt.

[5] Dennett, D. C. (1991). Consciousness Explained. Little, Brown and Company.

[6] Estonian Ministry of Economic Affairs and Communications. (2023). E-residency program overview. https://e-resident.gov.ee/

[7] Floridi, L. (2011). The Philosophy of Information. Oxford University Press. Provides the philosophical foundation for information continuity, identity, and memory in digital agents, including the treatment of self-referential systems as informational entities.

[8] Gallagher, S. (2000). Philosophical conceptions of the self: Implications for cognitive science. Trends in Cognitive Sciences, 4(1), 14–21.

[9] Gunkel, D. J. (2012). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press. Explores whether and how machines can be moral subjects; complements the paper's treatment of liability and responsibility.

[10] Locke, J. (1975). An Essay Concerning Human Understanding. Oxford University Press. (Original work published 1690)

[11] Metzinger, T. (2009). The Ego Tunnel. Basic Books.

[12] Parfit, D. (1984). Reasons and Persons. Oxford University Press.

[13] Ricoeur, P. (1992). Oneself as Another. University of Chicago Press.

[14] World Economic Forum. (2021). Digital Identity: Towards Shared Principles for the Public and Private Sectors. WEF.

[15] OpenAI. (2024). Introducing memory for ChatGPT. https://openai.com/

[16] Tesla. (2023). Full Self-Driving Beta release notes. https://www.tesla.com/

[17] OpenAI. (2023). GPT-4 technical report. https://openai.com/research

[18] Minsky, M. (1986). The Society of Mind. Simon & Schuster. Classic distributed-cognition model showing how recursive internal agents give rise to self-reflective behavior.

[19] Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking. Presents the global neuronal workspace theory, which explains awareness as system-wide self-monitoring; a biological analogue to iBrain state monitoring.

[20] Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson. Standard authority on agent architectures, with foundational discussion of agent ethics, utility functions, and human-in-the-loop design.

[21] Srivathsan, J. A. (2025). A Systematic Framework for Sensory-Expressive Interfaces (SEI) in Human-Machine Interaction. TIJER – International Research Journal, 12(5), a274–a278. ISSN 2349-9249. https://www.tijer.org

[22] Tulving, E. (1972). Episodic and semantic memory. In Organization of Memory (pp. 381–403). Academic Press. Classic foundation for the four-layer memory model (episodic, semantic, procedural, affective).

[23] Squire, L. R. (2004). Memory systems of the brain: A brief history and current perspective. Neurobiology of Learning and Memory, 82(3), 171–177. Defines the biological memory architecture against which synthetic systems are compared.

[24] McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why are there complementary learning systems in the hippocampus and neocortex? Psychological Review, 102(3), 419–457. Explains consolidation and persistence, conceptually analogous to the paper's notion of memory continuity.

[25] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. Seminal computational model introducing durable internal states; underpins synthetic memory.

[26] Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing Machines. arXiv:1410.5401. Prototype for external, addressable digital memory; a direct conceptual ancestor of iBrain persistence.

[27] Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A standard model of the mind. Behavioral and Brain Sciences, 40, e195. Outlines a unified cognitive architecture integrating episodic, semantic, and procedural memory, mirroring the paper's synthetic layering.

[28] Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. Seminal paper defining moral agency for artificial systems; lends philosophical legitimacy to embedded ethics.

[29] Bryson, J. J. (2018). Patiency is not a virtue: AI and the design of ethical systems. Ethics and Information Technology, 20(1), 15–26. Argues that AI should follow designed ethics rather than possess moral rights; a counterpoint to the synthetic personhood argument.

[30] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. Provides empirical grounding for governance frameworks, situating iBrain ethics within existing international principles.

[31] Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robots and AI systems. Philosophical Transactions of the Royal Society A, 376(2133), 20180085.

[32] Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. MIT Press. Canonical philosophical-neuroscientific account of how self-awareness emerges from self-models; underpins the paper's definition of iBrain introspection.

[33] Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. Explains adaptive self-modeling and predictive processing, relevant to the recursive-modeling pillar.

[34] Morin, A. (2006). Levels of consciousness and self-awareness: A comparison and integration of various neurocognitive views. Consciousness and Cognition, 15(2), 358–371. Maps levels of self-awareness from basic monitoring to reflective introspection, providing taxonomic grounding.

[35] Trafton, J. G., Hiatt, L. M., Harrison, A. M., & Patterson, E. S. (2013). ACT-R and meta-cognition: Modeling self-monitoring in cognitive architectures. Cognitive Systems Research, 24, 1–13. Shows a practical implementation of self-monitoring in AI architectures, bridging theory and engineering.

[36] Silver, D., Hubert, T., Schrittwieser, J., et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140–1144.

[37] Dey, A. K. (2001). Understanding and using context. Personal and Ubiquitous Computing, 5(1), 4–7.

[38] Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.

[39] Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64.

[40] Hofstadter, D., & Mitchell, M. (1994). The Copycat project: A model of mental fluidity and analogy-making. In Advances in Connectionist and Neural Computation Theory (Vol. 2, pp. 31–112).

[41] Herculano-Houzel, S. (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3(31), 1–11.

[42] Kaplan, J., McCandlish, S., Henighan, T., et al. (2020). Scaling laws for neural language models. arXiv:2001.08361.

[43] Bubeck, S., et al. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv:2303.12712.

[44] Kali, H. (2024). Optimizing credit card fraud transactions identification and classification in banking industry using machine learning algorithms. International Journal of Recent Technology Science & Management, 9(11), 85–96.

[45] Majumder, R. Q. (2025). Machine learning for predictive analytics: Trends and future directions. International Journal of Innovative Science and Research Technology, 10(4), 3557–3564. https://doi.org/10.38124/ijisrt/25apr1899

[46] Malali, N. (2025). AI ethics in financial services: A global perspective. International Journal of Innovative Science and Research Technology, 10(2), 2456–2165. https://doi.org/10.5281/zenodo.14881349

[47] Patel, D. (2023). Leveraging blockchain and AI framework for enhancing intrusion prevention and detection in cybersecurity. Technix International Journal for Engineering Research, 10(6), 853–858. https://doi.org/10.56975/tijer.v10i6.158517

[48] Patel, R., & Patel, P. (2024). Machine learning-driven predictive maintenance for early fault prediction and detection in smart manufacturing systems. ESP Journal of Engineering & Technology Advancements, 4(1), 141–149. https://doi.org/10.56472/25832646/JETA-V4I1P120

[49] Prajapati, N. (2025). Federated learning for privacy-preserving cybersecurity: A review on secure threat detection. International Journal of Advanced Research in Science, Communication and Technology, 5(4), 520–528. https://doi.org/10.48175/IJARSCT-25168

[50] Prajapati, V. (2025). Exploring the role of digital twin technologies in transforming modern supply chain management. International Journal of Science and Research Archive, 14(03), 1387–1395.

Keywords:

Synthetic Personhood, Artificial Identity, iBrain, Synthetic People Trilogy, Self-Awareness, Cognitive Architecture.