Cole Cohen first came to the attention of Valerie Prentis, coordinator of Parasol Pictures' "Straight to Cortex" experimental television division,* in 2010, when Valerie's assistant showed her a blog she had come across that seemed to implicate Parasol Pictures' Publishing and Media Division in a strange penchant for hiring only a certain type of author to write Young Adult novels.
"Vampires ," Cole wrote,"are the new "Greed is Good" for the under 16 set...
I remember in 1986 ,simply knowing that "Greed was Bad"
but by the end of 1987
"this was ..no longer to be "The Style of the Times"
Why,Cole asked ,are certain themes being introduced in YA literature ...notably 'that it is GOOD to be TAKEN"
.it was "good to be part of a Cabal'...it was good to be mesmerized.
Valerie at first assumed Cole Cohen to be one of those "Christian Crusader" types. Then, re-reading the piece, Valerie told her assistant, "Oh, he's just some kike writing for some local city paper... but let's keep an eye on him anyway. Those are Tricky Business... always were, always will be."
"Vampires ," Cole wrote,"are the new "Greed is Good" for the under 16 set...
I remember in 1986 ,simply knowing that "Greed was Bad"
but by the end of 1987
"this was ..no longer to be "The Style of the Times"
Why,Cole asked ,are certain themes being introduced in YA literature ...notably 'that it is GOOD to be TAKEN"
.it was "good to be part of a Cabal'...it was good to be mesmerized.
Valerie ,at first assumed Cole Cohen to be "one of those Christian Crusader" types.,than re-reading the piece Valerie told her assistant ,' oh, he's just some kike writing for some local City paper" -but let's keep an eye on him -anyway..those are Tricky Business...always were always will be"
*While most closed-loop brain-computer interface (BCI) systems provide feedback to the user on system performance through the presentation of sensory (primarily visual) information, approaches have also been developed to provide sensory feedback through direct stimulation of the nervous system (see Géléoc and Holt, 2014, and Chuang et al., 2014, for review). Recent explorations provide a means of conveying somatosensory sensations of touch, temperature, pain, and vibration to participants in several cross-demographic studies (Hebert et al., 2013).
Sensory percepts can also be elicited through direct brain stimulation (Schiller et al., 2011; Kar and Krekelberg, 2012; Larson and Cheung, 2012; Tabot et al., 2013; Zaaimi et al., 2013; May et al., 2013; Johnson et al., 2013). Such findings provide proof of concept for incorporating induced sensory feedback into BCI systems (e.g., see O'Doherty et al., 2011). Studies suggest that neural stimulation may even have the potential to alter behaviors through modulation of the molecular mechanisms of synaptic efficacy (Jacobs et al., 2012; Rahman et al., 2013; Song et al., 2013).
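For intuition only, here is a minimal sketch of the closed loop these studies describe: record neural activity, decode a command, act on it, and return feedback by stimulation rather than through a visual display. Every function name below (read_neural_activity, decode_intent, drive_effector, stimulate_feedback) is a hypothetical placeholder standing in for hardware and models that real systems provide; this is not any published system's API.

# A toy closed-loop BCI cycle with stimulation-based feedback (Python).
# All functions are hypothetical stand-ins, not a real device API.
import random

def read_neural_activity():
    """Stand-in for acquiring one window of neural data (e.g., firing rates)."""
    return [random.random() for _ in range(96)]

def decode_intent(signal):
    """Stand-in for a decoder that maps neural activity to a movement command."""
    return sum(signal) / len(signal)

def drive_effector(command):
    """Stand-in for moving a cursor or prosthetic and sensing the physical outcome."""
    return {"contact_force": abs(command)}

def stimulate_feedback(outcome):
    """Stand-in for encoding the outcome as a stimulation pattern (the feedback path)."""
    return outcome["contact_force"] * 10.0  # e.g., pulse amplitude; arbitrary scaling

if __name__ == "__main__":
    for _ in range(3):                      # three iterations of the loop
        signal = read_neural_activity()     # record
        command = decode_intent(signal)     # decode
        outcome = drive_effector(command)   # act
        stimulate_feedback(outcome)         # close the loop with somatosensory feedback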
(2011) Scientists Reconstruct Brains' Visions Into Digital Video In Historic Experiment
UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.
Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology, says, "This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds."
They used three different subjects for the experiments; incidentally, all were part of the research team, because the procedure requires being inside a functional Magnetic Resonance Imaging (fMRI) system for hours at a time. The subjects were shown two different groups of Hollywood movie trailers while the fMRI system recorded blood flow through their visual cortex.
The readings were fed into a computer program in which they were divided into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information in the movies to specific patterns of brain activity. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.
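The "learning" step described above can be pictured as fitting one predictive model per voxel that maps movie features to that voxel's response. The sketch below uses random arrays and scikit-learn's ridge regression purely as stand-ins for the study's actual feature extraction and fitting procedure; none of the numbers or names come from the published work.

# Minimal sketch: learn a mapping from movie features to voxel responses (Python).
# Random data and ridge regression are illustrative stand-ins only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 1000, 500, 2000

movie_features = rng.normal(size=(n_timepoints, n_features))  # stand-in: shape/motion features per time point
voxel_responses = rng.normal(size=(n_timepoints, n_voxels))   # stand-in: fMRI voxel time courses

# One regularized linear model relating the features to every voxel's activity.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(movie_features, voxel_responses)

# Given the features of a new clip, the model predicts the brain activity
# that clip should evoke -- the quantity the reconstruction step relies on.
new_clip_features = rng.normal(size=(1, n_features))
predicted_activity = encoding_model.predict(new_clip_features)
print(predicted_activity.shape)  # (1, 2000): one predicted value per voxel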
An 18-million-second picture palette
After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of the brain activity each clip would be expected to evoke. From all these videos, the software picked the one hundred clips whose predicted brain activity was most similar to the activity evoked by the clips the subject actually watched, combining them into one final movie. Although the resulting video is low resolution and blurry, it clearly matched the actual clips watched by the subjects.
Think about those 18 million seconds of random videos as a painter's color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he's seeing. The software is the painter, and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares that to the brain reactions predicted for the 18-million-second palette, and picks the clips that most closely match those brain reactions. Then it combines the clips into a new one that approximates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.
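Carrying the palette analogy into code, the matching step amounts to: predict the brain activity every library clip would evoke, score each prediction against the observed activity, and average the frames of the best-scoring clips. The sketch below assumes a fitted encoding model can be reduced to a simple linear map and uses random arrays throughout; it illustrates the idea, not the published pipeline.

# Minimal sketch of the "palette" matching step (Python); all data are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_features, n_voxels = 5000, 500, 2000

library_features = rng.normal(size=(n_clips, n_features))   # features of each library clip
library_frames = rng.random(size=(n_clips, 64, 64))         # one representative frame per clip
weights = rng.normal(size=(n_features, n_voxels)) * 0.01    # toy stand-in for a fitted encoding model
observed_activity = rng.normal(size=n_voxels)               # voxel pattern measured during the target clip

# Predicted voxel activity for every clip in the library.
predicted = library_features @ weights                      # shape (n_clips, n_voxels)

# Correlate each clip's predicted activity with the observed pattern.
obs = (observed_activity - observed_activity.mean()) / observed_activity.std()
pred = (predicted - predicted.mean(axis=1, keepdims=True)) / predicted.std(axis=1, keepdims=True)
similarity = pred @ obs / n_voxels                          # one correlation-like score per clip

# Average the frames of the 100 best-matching clips into one blurry composite.
top = np.argsort(similarity)[-100:]
reconstruction = library_frames[top].mean(axis=0)
print(reconstruction.shape)  # (64, 64)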
Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.
In this other video you can see how this process worked for the three experimental subjects. In the top left square you can see the movie the subjects were watching while they were in the fMRI machine. Right below it you can see the movie "extracted" from their brain activity. It shows that this technique gives consistent results regardless of what's being watched, or who's watching. The three lines of clips next to the left column show the random movies that the computer program used to reconstruct the visual information.
Right now, the resulting quality is not good, but the potential is enormous. Lead research author, and one of the lab test bunnies, Shinji Nishimoto thinks this is the first step toward tapping directly into what our brain sees and imagines:
Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.
Eventually it will be possible to capture your visual memories, your dreams, and the wild ramblings of your imagination in a video that you and others can watch with your own eyes.