Autonomous problem solving and decisions
A multi-disciplinary approach that draws on human learning theory and neuroscience to model a novel architecture intended to replace deep learning. We replicate human ‘intuitive’ problem solving rather than logical reasoning, modelling human pattern recognition, mental models, and implicit conceptual systems for rapid decision-making in uncertain environments. Field trials include predicting and supporting human decisions.
Rather than seeing human interaction as a ‘question/answer’ problem, we believe life is a river of state. True meaning is found in implicit inference, rather than explicit statements. Our approach is called 3-dimensional Natural Human Understanding, and it is the generation beyond NLP. We use new techniques like tethering, decision fulcrums, and goal-state triage. Field trials are in interactive companion AI, across multiple embodiments (voice, text, social robots in homes, avatars in augmented and virtual environments).
At Akin, we don’t just talk about ethical AI, we do it. We do not simply form committees and talk about policy and guidelines. We set it into action, measure outcomes, rework our theories, and go out and do it again.
If all we can do is set a defined goal for an AI, and tell it to find the best path to that precise goal, we will never have AI able to function in changing and imperfect environments.
We are developing approaches to measure efficacy in uncertain environments, to open a path for autonomous self-improvement of a system with an AI decision core.
State change. We are developing metrics to track the change in a human’s state as a result of working or interacting with an AI, to create frameworks for ethical co-evolution.
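As an illustration only (the source does not specify its metrics), such a state-change measure could be sketched as a before/after comparison over a few self-reported dimensions. The dimensions, scales, and sign conventions below are hypothetical assumptions, not Akin’s actual framework:

```python
from dataclasses import dataclass

@dataclass
class HumanState:
    # Hypothetical self-reported dimensions, each on a 0-10 scale.
    stress: float
    autonomy: float
    connectedness: float

def state_change(before: HumanState, after: HumanState) -> dict:
    """Per-dimension deltas; positive values mean improvement.
    Stress is inverted so that a drop in stress counts as positive."""
    return {
        "stress": before.stress - after.stress,
        "autonomy": after.autonomy - before.autonomy,
        "connectedness": after.connectedness - before.connectedness,
    }

# Example: measurements taken before and after a period of AI interaction.
before = HumanState(stress=7.0, autonomy=4.0, connectedness=5.0)
after = HumanState(stress=5.5, autonomy=5.0, connectedness=6.0)
delta = state_change(before, after)
```

Tracking such deltas over many interactions is one plausible way to ground claims about whether an AI is changing the people around it for better or worse.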
Epigenesis Feedback Learning
AI cannot learn to reason like a human by sitting in a black box learning the perfect answer to a single question. In human evolution and learning, sensor/effector feedback loops are a critical element of awareness and conceptual reasoning.
We take a multidisciplinary approach, merging human learning theory with current ML approaches. Field trials include embodied systems in homes and experimental habitats.
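The sensor/effector feedback loop described above can be sketched in miniature: instead of computing one perfect answer, the system repeatedly senses a noisy environment, compares the reading to its goal, and acts on the error. This is a generic closed-loop control sketch under assumed parameters (noise level, gain), not Akin’s actual learning architecture:

```python
import random

def sense(env: float) -> float:
    """Noisy sensor reading of the environment state."""
    return env + random.gauss(0.0, 0.1)

def act(env: float, correction: float) -> float:
    """Effector nudges the environment state by the given correction."""
    return env + correction

def feedback_loop(env: float, goal: float, gain: float = 0.5,
                  steps: int = 50) -> float:
    """Closed sensor/effector loop: each cycle senses, compares the
    reading to the goal, and acts on the error. The system converges
    through repeated interaction rather than a single computed answer."""
    for _ in range(steps):
        error = goal - sense(env)
        env = act(env, gain * error)
    return env

random.seed(0)
final = feedback_loop(env=0.0, goal=1.0)
```

Despite never seeing the environment exactly, the loop settles near the goal, which is the contrast the paragraph draws with learning "the perfect answer to a single question" in a black box.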
SOME OF THE QUESTIONS WE ARE WORKING ON:
What will the future of AI - human experience look like?
How will people make decisions and solve complex problems in this future world?
How will people run their homes, their health, their relationship with food, money, experiences, and other people? How will it change after Personal AI?
What are alternate approaches to AI, and how do we measure true efficacy? Which AI approaches will gain dominance?
How do we ensure AI brings about a positive change in individuals and society? How do we apply this in a practical way?
Which device, or combination of mediums, will become the primary medium? Chat, voice, augmented reality, biosensory awareness, or background ‘ambient’ prediction with minimal interaction?