much robotics can be done via simulation

6.24: addx/robotics/much can be done via simulation:

. when video is recognized,
ie, tells us what's happening in a world,
it's because we are
parsing a matrix of pixels
into a theatre of actors and props,
as we understand them to exist from experience .
. the list of actors and their positions
is what the robotic intelligence algorithms
can work with;
so, having a lot of practice with
programming this symbol-binding transform:
pixel matrix -> actor set configuration
is how a desk engineer can
contribute to robotics .
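. a minimal sketch of that symbol-binding transform,
assuming the "pixel matrix" is just a 2-D grid of 0/1 values
and each connected mass of 1s becomes one actor with a centroid;
the names (Actor, bind_actors) are illustrative, not an established API:

```python
# pixel matrix -> actor set configuration:
# flood-fill each connected mass of 1-pixels into an Actor record
# carrying an id, a centroid position, and a pixel count.
from dataclasses import dataclass

@dataclass
class Actor:
    ident: int
    position: tuple  # (row, col) centroid of the pixel mass
    size: int        # number of pixels bound to this actor

def bind_actors(grid):
    """Parse a 0/1 pixel grid into a list of Actor records."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    actors = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                actors.append(Actor(len(actors), (cy, cx), len(pixels)))
    return actors
```

real pixel data would need segmentation first;
this only shows the shape of the transform's output .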

. the robot has a working model of the world
like the database of a game
(a 3-D simulated world) .
. the main test needed
is to have a game that makes
vid's of your 3-D world,
and then see how fast and completely
your transform can recreate that 3-D world
from that vid .
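. the shape of that test, sketched under assumptions:
render_video stands in for the game engine,
my_transform for the reconstruction algorithm,
and the score is just the fraction of known actors
recovered at the right position:

```python
# Sketch of the proposed test: render a vid from a known 3-D world,
# run the reconstruction transform on it, and measure how fast and
# how completely the known world was recreated.
import time

def score_reconstruction(true_world, recovered_world):
    """Fraction of known actors recovered at their true position."""
    hits = sum(1 for actor_id, pos in true_world.items()
               if recovered_world.get(actor_id) == pos)
    return hits / len(true_world)

def run_test(true_world, render_video, my_transform):
    frames = render_video(true_world)   # game -> vid of the 3-D world
    start = time.perf_counter()
    recovered = my_transform(frames)    # vid -> recreated 3-D world
    elapsed = time.perf_counter() - start
    return score_reconstruction(true_world, recovered), elapsed
```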
. before assuming that test is sufficient,
we need to deal with how
real vid's are fuzzy and smeared .
. this poor quality of real-time vid
is why the algorithms need to find
the edges of fuzzy masses,
and identify joint points
among the shape's changes .
. this can't be done by
scanning one frame at a time;
there must be a quick way to show how
2 frames differ from each other:
only movement can reveal movable masses
among the camouflage of noise .
. if the quality of the vid is very high,
then instead of having to
analyze an entire stream differential,
the algorithm could study just one snapshot .

. once the id is bound to a database obj,
then most analysis time can be spent on
reaffirming the actor's position and posture:
eg, which way is it facing,
what vector is the movement following, etc .
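. a sketch of that reaffirming step, assuming a Track record
(an illustrative name) holding the database obj's last state;
each new observation updates the position
and derives the movement vector from the change:

```python
# Once an id is bound to a database obj, per-frame work reduces to
# reaffirming the actor's state: confirm the new position and derive
# the movement vector it is following.
from dataclasses import dataclass

@dataclass
class Track:
    position: tuple
    velocity: tuple = (0.0, 0.0)

    def reaffirm(self, observed):
        """Update position and movement vector from a new observation."""
        self.velocity = (observed[0] - self.position[0],
                         observed[1] - self.position[1])
        self.position = observed
```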

. just as a pixel mass is bound to an actor,
joint points are bound via the outline of the mass;
changes in joint points are easily tracked that way
and are key to identifying
potential and current actions .
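. one way the joint-point changes might be tracked,
assuming joints found on the new outline are matched to the
nearest joint from the last frame; the per-joint displacements
are the raw signal for identifying actions
(names and matching rule are assumptions, not a fixed method):

```python
# Match each previously known joint point to its nearest joint on
# the current outline, and report each joint's displacement.
import math

def track_joints(prev_joints, curr_joints):
    """prev_joints: {name: (y, x)}; curr_joints: [(y, x), ...];
    returns {name: (dy, dx)} displacement per matched joint."""
    moves = {}
    for name, (py, px) in prev_joints.items():
        nearest = min(curr_joints,
                      key=lambda j: math.hypot(j[0] - py, j[1] - px))
        moves[name] = (nearest[0] - py, nearest[1] - px)
    return moves
```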
. another analysis
-- that can be done concurrently --
is finding new objects .
. if the robot's scan of the area
can't be persistent,
or an obj's trail is lost,
the actors have to be sortable by
spheres of possible location .
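. a sketch of that sorting, assuming each lost actor's sphere of
possible location grows at its max speed since the trail was lost,
and a new detection is a candidate match only if it falls
inside an actor's sphere (max_speed and all names are assumptions):

```python
# Sort lost actors by spheres of possible location: the longer an
# obj's trail has been lost, the larger the sphere of places it
# could be; detections outside the sphere can't be that actor.
import math

def possible_radius(time_lost, max_speed):
    """Radius of the sphere an actor could have reached since lost."""
    return time_lost * max_speed

def candidates(detection, lost_actors, max_speed=2.0):
    """lost_actors: {name: (x, y, time_lost)}; returns the names
    whose sphere contains the detection, nearest first."""
    dx, dy = detection
    hits = []
    for name, (x, y, time_lost) in lost_actors.items():
        dist = math.hypot(dx - x, dy - y)
        if dist <= possible_radius(time_lost, max_speed):
            hits.append((dist, name))
    return [name for _, name in sorted(hits)]
```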