In computer vision, many of the algorithms that have been developed do not take this integrated view into account. Algorithms are usually run on a set of data and then produce some kind of output, for instance a three-dimensional reconstruction of a scene. Although such an output is desirable for many applications, it does not help very much in understanding how the visual apparatus processes information. In carrying out this research, we build prototype systems which continuously process information. We conduct research at the intersection of image understanding, computer vision, computer graphics, and evolutionary algorithms. When developing computer vision algorithms, it is usually a good idea to look at how nature solves the problem. For instance, some very successful object recognition algorithms are actually quite simple and can be mapped to the function of the human visual system. Biologically inspired systems help us to better understand how the organism works. They also lead to the development of simple yet efficient algorithms.
Machine Consciousness:
Now that large language models are beginning to reach human-level performance, it will be interesting to find out whether such models are conscious or will eventually become conscious. Our view is that consciousness is based on communication (Ebner, 2022). A brief summary was given in this talk at Oxford University in 2019. A perception, or quale, arises due to the mathematical structure of the underlying space. The quale of color is actually an estimate of the reflectance of the object being viewed. We assume that this also holds for other types of qualia.
Color Constancy:
We have conducted extensive research in the area of color constancy (Ebner and Hansen, 2013; Ulucan, Ulucan and Ebner, 2022; Ulucan, Ulucan and Ebner, 2023). We have developed a computational algorithm for color perception which can be mapped to the human visual system (Ebner, 2009). This algorithm is able to explain why color constancy performance varies when an object moves (Ebner, 2012). Our research also draws on recent advances by neurologists and psychologists. Neurologists measure the response characteristics of individual neurons or neural assemblies. Psychologists conduct experiments by presenting different stimuli and questioning the subjects in an effort to learn about their perception. We use their results to validate our algorithms.
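The following is a minimal sketch of the gray-world idea underlying this line of work: reflectance is estimated by dividing each color channel by twice its local space average color. The use of a Gaussian blur in place of the iterative lateral averaging described in the papers, as well as all parameter values, are simplifying assumptions made for illustration only.

```python
# Illustrative sketch of color constancy based on local space average color.
# This is NOT the published algorithm: the local average is approximated by a
# Gaussian blur instead of iterative lateral averaging, and the factor of 2
# reflects the gray-world assumption that average scene reflectance is 0.5.

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_reflectance(image, sigma=30.0):
    """Estimate per-pixel reflectance from an H x W x 3 float image in [0, 1]."""
    reflectance = np.empty_like(image)
    for c in range(image.shape[2]):                      # each color channel separately
        local_avg = gaussian_filter(image[..., c], sigma=sigma)  # local space average color
        reflectance[..., c] = image[..., c] / (2.0 * local_avg + 1e-6)
    return np.clip(reflectance, 0.0, 1.0)

# Usage: pass an RGB image as a float array in [0, 1];
# the output is an estimate of the scene reflectance with the illuminant factored out.
```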
Evolutionary Computer Vision:
In developing algorithms for machine intelligence, it should be noted that natural evolution is, to date, the only process known to have produced intelligent behavior. Therefore, one focus of our research is the application of evolutionary methods to the generation of algorithms for machine intelligence.
We have been working on the development of a learning, self-adaptive vision system. This vision system uses evolutionary algorithms to automatically search the space of image processing algorithms to generate detectors. This system currently uses one cue (motion) to evolve detectors which also work when this cue is not available (Ebner, 2010; Ebner, 2009; Ebner, 2008).
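As an illustration of how such a system can be organized, here is a minimal sketch of an evolutionary loop that searches over short chains of image-processing primitives, using a motion-derived mask as the teaching signal. The primitive set, the mutation-only variation, and the correlation-based fitness are illustrative assumptions, not the evolved programs from the cited papers.

```python
# Illustrative sketch: evolve an image detector against a motion-derived teaching
# signal. Once evolved, the detector can be applied to frames in which the motion
# cue is absent.

import random
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

PRIMITIVES = {
    "blur":   lambda img: gaussian_filter(img, sigma=2.0),
    "edge_x": lambda img: np.abs(sobel(img, axis=0)),
    "edge_y": lambda img: np.abs(sobel(img, axis=1)),
    "invert": lambda img: 1.0 - img,
    "square": lambda img: img ** 2,
}

def random_detector(length=4):
    """A detector is a short sequence of image-processing primitives."""
    return [random.choice(list(PRIMITIVES)) for _ in range(length)]

def apply_detector(detector, image):
    out = image
    for name in detector:
        out = PRIMITIVES[name](out)
    return out

def fitness(detector, image, motion_mask):
    """Correlation between the detector response and the motion cue."""
    response = apply_detector(detector, image).ravel()
    target = motion_mask.ravel()
    if response.std() < 1e-9 or target.std() < 1e-9:
        return -1.0
    return float(np.corrcoef(response, target)[0, 1])

def evolve(image, motion_mask, pop_size=20, generations=30):
    population = [random_detector() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda d: fitness(d, image, motion_mask), reverse=True)
        parents = scored[: pop_size // 2]                 # keep the better half
        offspring = []
        for p in parents:                                 # copy a parent, mutate one primitive
            child = list(p)
            child[random.randrange(len(child))] = random.choice(list(PRIMITIVES))
            offspring.append(child)
        population = parents + offspring
    return max(population, key=lambda d: fitness(d, image, motion_mask))

# Usage: best = evolve(gray_frame, motion_mask), where gray_frame is a 2-D float
# image and motion_mask marks moving pixels (e.g. obtained by frame differencing).
```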
Modeling of Lateral Interactions between Neurons:
Since 2009, we have been collaborating with Stuart Hameroff, Professor Emeritus, Departments of Anesthesiology and Psychology, Director of the Center for Consciousness Studies at the University of Arizona, on the problem of Machine Consciousness. We have modeled assemblies of spiking neurons which are laterally connected via dendritic gap junctions. Using the lateral coupling between neurons, we were able to show how such assemblies are able to perform figure/ground separation (Ebner and Hameroff, 2011c; Ebner and Hameroff, 2011b; Ebner and Hameroff, 2011a).
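A minimal sketch of the kind of model involved is given below: a chain of leaky integrate-and-fire neurons whose membrane potentials are pulled toward those of their neighbors by a gap-junction-like coupling term, so that neurons driven by the figure fire together while ground neurons remain silent. The one-dimensional layout, the constants, and the coupling rule are illustrative simplifications of the published two-dimensional model.

```python
# Illustrative sketch of laterally coupled leaky integrate-and-fire neurons.
# The coupling term stands in for dendritic gap junctions between neighbors.

import numpy as np

def simulate(inputs, steps=500, dt=1.0, tau=20.0, v_thresh=1.0, coupling=0.05):
    """Simulate a 1-D chain of LIF neurons with nearest-neighbor coupling.

    inputs: per-neuron constant drive; neurons covering the 'figure' receive a
    stronger drive than 'ground' neurons and tend to fire in synchrony.
    """
    n = len(inputs)
    v = np.zeros(n)                            # membrane potentials
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        # lateral (gap-junction-like) current pulls neighboring potentials together
        lateral = coupling * (np.roll(v, 1) + np.roll(v, -1) - 2.0 * v)
        v += dt / tau * (-v + inputs) + lateral
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = 0.0                         # reset after a spike
    return spikes

# Example: neurons 20-39 form the 'figure' (stronger drive), the rest the ground.
drive = np.full(60, 0.9)
drive[20:40] = 1.3
spike_raster = simulate(drive)                 # figure neurons fire, ground neurons do not
```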