Aquarian Steady State theorem

12.31: relig/aquarian/ElectroMagnetic Relationism
. for the dawning of the Aquarian Age,
I'm publishing a new version of 1992's
Physics/eternity/How a Star is Created:

. the vanilla "Steady State" theorem was disproved;
but, it's a perfect surname for a cosmology that
 supports the Final Anthropic Principle;
so, in honor of the 2012 festivities,
my cosmology will be referred to as
the Aquarian Steady State theorem .

the Screw U. principle:
. a new twist on developing this subject
is the theory that all real research on it
is stifled because it inexorably leads to
an understanding of Directed Energy Weaponry .


DARPA's $34M 1st-responder co-robots #messiah Stimulus

12.8: news.adds/robotics/
DARPA's $2M 1st-responder co-robots Challenge:
. faq and forums @ theroboticschallenge.org ...

2011.6: President Obama launches NRI:
. NRI (National Robotics Initiative) ... .

National Robotics Initiative

12.8: news.adds/robotics/NRI(National Robotics Initiative):

2011.6: President Obama launches NRI:
. the National Robotics Initiative
is funding the development of co-robots .
. the development and use of co-robots,
defined as robots that are co.workers;
ie, serving as members of a team with employees,
or helping the physically or mentally impaired
-- in a symbiotic relationship with humans .
. supporters of co-robot development include:
National Science Foundation (NSF),
National Aeronautics and Space Administration (NASA),
National Institutes of Health (NIH),
and the U.S. Department of Agriculture (USDA).
. this program develops co-robots,
the next generation of robotics,
to advance the capability and usability
of such systems and artifacts,
and to encourage existing and new communities
to focus on innovative application areas.
It will address the entire life cycle
from fundamental research and development
to manufacturing and deployment.
. important parts of this initiative include
the establishment and infusion of robotics
in educational curricula and research
to gain a better understanding of the long term
social, behavioral and economic implications
of using co-robots across all areas of human activity .

Open Source Robotics Foundation, Inc

12.8: news.adds/robotics/Open Source Robotics Foundation, Inc
2012.4.10, 30: DRC (Robotics Challenge) Broad Agency Announcement:
GFE Simulator is expected to be provided by the
Open Source Robotics Foundation, Inc.,
and will initially be based on the ROS Gazebo simulator.
Expectations for the GFE Simulator include the following:
• Models the three-dimensional environment;
• Allows import of robot's
kinematic, dynamic, and sensor models;
• Allows robot's co.workers to
send commands over a network
(identical to those sent to a physical robot)
and receive data from the simulated robot
(similar to that received from a physical robot);
• Uses physics-based models of
inertia, actuation, contact, and environment dynamics
to simulate the robot’s motion;
• Runs in real-time on the “cloud,”
likely on Graphics Processing Units (GPUs);
• Usable from the cloud
by at least 100 concurrent teams.
The GFE Simulator supplier
will manage an open-source effort
where the simulator, robot models,
and environment models
are developed and improved by the supplier
as well as by contributors throughout the world.
2012.4.12: OSRF @ fbo.gov:
Sole Source Intent Notice
Open-Source Robot Simulation Software
for DARPA Robotics Challenge
Solicitation Number: DARPA-SN-12-34

The Defense Advanced Research Projects Agency,
Tactical Technology Office (TTO),
intends to award a sole source contract to the
Open Source Robotics Foundation, Inc.,
of Menlo Park, California.
Under the contract, OSRF will develop an open-source
robot simulation software system
for use by the DARPA Robotics Challenge program.
The effort will develop validated models of
robots and field environments  .
The effort will make the simulation software
available on an open-source basis,
and will host the simulation so that
participants in the DARPA Robotics Challenge program
can access it freely.
To search for innovative approaches to provide this service,
DARPA performed an informal market survey.
Of the few existing candidate open-source robot simulators,
the Open Source Robotics Foundation was deemed to be
the sole viable supplier for providing the
necessary open-source simulation software
within the specified timeframe.
The Open Source Robotics Foundation simulator, called Gazebo,
simulates many robots and sensors,
supports the ROS and Player robot control middleware,
and maintains tutorials, documentation
and an active user base.
The technology underlying Gazebo
supports many use cases
including remote operation
through a client-server architecture,
customization via a flexible plugin interface,
accurate physics simulation,
and rapid setup through intuitive
graphical and programmatic interfaces.
The proposed contract action is for
supplies or services for which the Government
intends to solicit and negotiate with
only one source under authority of FAR 6.302-1
"Only one responsible source
and no other supplies or services
will satisfy agency requirements."
As the open-source robot simulation leader,
the Open Source Robotics Foundation possesses
unique knowledge and capabilities required to
carry out the required research effort.
No other source would be capable of
satisfying the requirements
for an affordable end-to-end solution
necessary to meet the Government's needs.
2012.4.16: OSRF @ willowgarage:
The history of open source is replete with
significant moments in time,
and today Willow Garage would like to humbly submit
our own milestone -- the announcement of
the Open Source Robotics Foundation.
As Willow Garage has worked to grow and shepherd ROS
within the robotics world
our hope was that ROS would one day stand on its own.
With the announcement today of the OSR Foundation,
that day is finally here.
The mission of the OSR Foundation is "to support
the development, distribution, and adoption
of open source software for use in robotics
research, education, and product development."
The first initiative of the OSRF will be participation in
the DARPA Robotics Challenge, announced recently.
DARPA today sponsored a Proposer's Day Workshop
where more information about the Robotics Challenge
is available via Webcast.
During the Webcast, Nate Koenig from Willow Garage
gave a brief talk on the current and future state
of the open source Gazebo robot simulator,
which will be extended by the OSR Foundation
to support the DARPA Robot Challenge.
2012.4.17: Gazebo @ spectrum.ieee.org:
You're likely already familiar with the
Robot Operating System, or ROS,
in relation to Willow Garage's PR2 robots.
A few years ago, Willow Garage
integrated ROS and the PR2 into Gazebo,
a multi-robot simulator project
started at the University of Southern California
by Andrew Howard and Nate Koenig.
Willow Garage now provides financial support
for the development of Gazebo.
. ROS is now mature enough to go off and fend for itself,
and the Open Source Robotics Foundation
is the shiny new embodiment of that confidence.
DARPA isn't awarding the simulator contract to
Willow Garage or Gazebo's org,
but to a new foundation dedicated to the
full lifecycle of Robotics programming .
. here's what DARPA is expecting:
    The Open Source Robotics Foundation
    will develop an open-source
robot simulation software system
    for use by the DARPA Robotics Challenge program.
    The effort will develop validated models of robots
    (kinematics, dynamics, and sensors)
    and field environments (three-dimensional
    surfaces, solids, and material properties).
    The effort will develop physics-based models of
    inertia, actuation, contact, and environment dynamics
    to simulate the robot's motion.
    . available on an open-source basis,
    and hosted so that participants in the
    DARPA Robotics Challenge program
    can access it freely.
Open Source Robotics Foundation's sponsored projects:
. Gazebo is a 3D multi-robot simulator with dynamics.
. ROS (Robot Operating System) provides
hardware abstraction, device drivers, libraries, visualizers,
message-passing, package management, and more.
. Paradise Studios SkyX(open version) provides a
photorealistic, simple and fast sky rendering plugin
for the Ogre3D graphics engine,
compatible with Gazebo .
. Other open-source activities
that are related to the mission of OSRF
include the Arduino platform and the Make database.
. sponsors include:
# Bosch's Research and Technology Center (RTC)
has the distinction of being the only
non-university recipient of a Willow Garage PR2
as part of the PR2 Beta Program
[for hardware-testing with the ROS .]
# DARPA (Defense Advanced Research Projects Agency)
is funding the Gazebo robot simulator,
which will be used extensively in the
DARPA Robotics Challenge.
# Rethink Robotics's Baxter robot uses ROS .
# Sandia National Laboratories is
collaborating with OSRF to develop
low-cost, highly dexterous robot hands.
# Willow Garage is a Founding Contributor to OSRF.
Without Willow Garage's financial,
organizational, and moral support,
OSRF could never have become
the company that we are today.
# Yujin Robot is collaborating with OSRF
to develop tools for the authoring and management of
multi-robot systems in production environments .
2012.7: OSRF will concentrate on Gazebo:
. we're planning to put a lot of work into
the Gazebo simulator and ROS,
significantly improving the capability
and availability of robot simulation .
. we can have greater impact by
taking a "deep dive" in one area
rather than taking shallow responsibility for everything.
Over time, we'll contribute to a broader variety of projects,
mostly drawn from the ROS ecosystem.
. a ROS certification program may be
most valuable in a domain-specific manner.
eg, imagine a way of certifying a
ROS-controlled industrial robot arm
as being compatible with an accepted
ROS interface standard
(perhaps crafted by the forthcoming
ROS Industrial Consortium) .
Similarly, imagine a TurtleBot being certified
as implementing REP 119 .
. compatibility should be the focus;
certifying functionality is an entirely different topic .
. the ROS Platform development
continues to be managed by Willow Garage,
but responsibility for release management
will transition to OSRF eventually .
DRC/Roadmap > Tutorials > Main Page > Overview:
Gazebo is a multi-robot simulator for outdoor environments.
Like Stage (part of the Player project),
it is capable of simulating a population of
robots, sensors and objects,
but does so in a three-dimensional world.
It generates both realistic sensor feedback
and physically plausible interactions between objects
(it includes an accurate simulation of rigid-body physics).
Why use Gazebo?
Gazebo was originally designed to develop
algorithms for robotic platforms.
By realistically simulating robots and environments
code designed to operate a physical robot
can be executed on an artificial version.
This helps to avoid common problems
associated with hardware
such as short battery life, hardware failures,
and unexpected and dangerous behaviors.
It is also much faster to
spin up a simulation engine
than continually run code on a robot,
especially when the simulation engine
can run faster than real-time.
Over the years Gazebo has also
been used for regression testing.
Scenarios designed to test algorithm functionality
have been established and passed through test rigs.
These tests can be run continually
to maintain code quality and functionality.
Numerous researchers have also used Gazebo
to develop and run experiments
solely in a simulated environment.
Controlled experimental setups
can easily be created in which subjects can
interact with robots in a realistic manner.
Gazebo has been used to compare algorithms
for such things as navigation
and grasping in a controlled environment.
Gazebo is under active development at Willow Garage.
We are continually fixing bugs,
and adding new features.
If you have feature requests, need help,
or have bugs to report
please refer to the support page .
Gazebo is a feature-rich application that is
under constant development by a large user community.
The following lists a few of the primary features offered by Gazebo.
Dynamics Simulation
    Access multiple physics engines including ODE and Bullet.
    Direct control over physics engine parameters
    including accuracy, performance, and dynamic properties
    for each simulated model.
Advanced 3D Graphics
    Utilizing OGRE, Gazebo provides realistic
rendering of environments.
    State-of-the-art GPU shaders
generate correct lighting and shadows
    for improved realism.
    Generate sensor information from
    laser range finders, 2D cameras, Kinect style sensors,
    contact sensors, and RFID sensors.
Robot Models
    Many robots are provided including
    PR2, Pioneer2 DX, iRobot Create, TurtleBot,
    generic robot arms and grippers.
    Access to many objects from simple shapes to terrain.
Programmatic Interfaces
    Support for ROS and Player.
    API for custom interfaces.
    Develop custom plugins for robot model control,
    and for interacting with world components.
    Provides direct control to all aspects of
    the simulation engine, including
    the physics engine, graphics libraries,
    and sensor generation.
TCP/IP Communication
    Run Gazebo on remote servers
    Interface to Gazebo through
socket-based message passing
    using Google Protobufs.
Powerful Graphical Interface
    A lightweight Qt-based graphical interface
    provides users with direct control over
    many simulation parameters.
    View and navigate through a running simulation.
Collada Import
    Import meshes from many sources
    using Gazebo's built-in Collada reader.
Active User Community
    Research institutes around the world
utilize and contribute to Gazebo.
    Community supported help
through a mailing list, and wiki.
Person simulation
    Replay human motion capture data
in a running simulation.
install on Ubuntu Linux 12.04 (precise):
Set up your computer to accept software from
packages.ros.org and packages.osrfoundation.org:
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu
precise main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu
precise main" > /etc/apt/sources.list.d/gazebo-latest.list'

Retrieve and install the keys:
$ wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
$ wget http://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -

Update apt-get and install Gazebo:
$ sudo apt-get update
$ sudo apt-get install gazebo

source: drc/ubuntu/pool/main/g/gazebo .

seems to speak c++/boost:
Create a model plugin that subscribes to a ROS topic:
namespace gazebo {
  class ROSModelPlugin : public ModelPlugin { ... };
}
-- LOOK C++ !!!
. this may be why DARPA said
Gazebo is only a tentative choice;
they might have plans to translate all this work
over to some better programming language
supporting the new systems in the pipeline
that feature improved massive concurrency
based on very energy-efficient chips ... .

darpa's new project is UHPC 2010:
(Ubiquitous High Performance Computing)
The UHPC program seeks to develop
the architectures and technologies
that will provide the underpinnings,
framework and approaches
for improving power consumption, cyber resiliency,
and programmer productivity,
while delivering a thousand-fold increase
in processing capabilities.
2012.6.25: UHPC details:
. DARPA, in 2010, selected four "performers"
to develop prototype systems for its UHPC program:
Intel, NVIDIA, MIT, and Sandia National Laboratory.
. NVIDIA is teaming with Cray [designer of Chapel.lang],
Oak Ridge National Laboratory and others
to design its ExtremeScale prototype.
NVIDIA was not previously seen as
the cutting-edge of supercomputing,
[but GPU's are apparently the future
of massive concurrency ].
. Intel's 2010 revival of Larrabee,
as the Many Integrated Core (MIC) processor,
may be of use in its UHPC designs .
. Intel's prior concurrency tools include Ct language,
Threading Building Blocks, and Parallel Studio .
The first UHPC prototype systems
are slated to be completed in 2018.
. single-cabinet UHPC systems will need to deliver
a petaflop of High Performance Linpack (HPL)
and achieve an energy efficiency of at least
50 gigaflops/watt (100 times more efficient
than today's supercomputers.)
. the programmer should be able to implicitly
implement parallelism (without using MPI
or any other communication-based mechanism).
In addition, the operating system for these machines
has to be "self-aware,"
such that it can dynamically manage performance,
dependability and system resources.

open robotics university

12.8: news.adds/robotics/openroboticsuniversity:
co.g'+#Open Robotics University:
11:10 AM
*Can anyone in this community [g'+ ubuntu] write C++ or Java code that will be
robust and stable enough for an
advanced robotics platform?*
Please visit our website
Open Robotics University
(world's first tuition-free
engineering degree-granting university)
to view the specification for
DARPA Robotics Challenge
and National Robotics Initiative Robots
that we are working on.
Since we are an open-source university,
any individual can join our build team.
The winning prize for the competition is
*Please review the specification for ORU-DRR
to start generating discussion on our
“Open-Source Robotics” community page .
Here is a link to DARPA page.
Also see a link to Kawada Industries based in Japan.
We have been in talk with Kawada to provide us the
HRP-3 Promet MK-II platform for our research
to build our own version to meet DARPA goals.
I think we can win the $2,000,000.00 prize.
*Proposed Specification of ORU-DRR Robot:*
* The robot’s joint axes, electric devices, gear unit and power unit
should be liquid- and dust-proof.
* The robot should be about 6 ft tall and weigh about 160 pounds.
* The inner frame should be aluminum
and the outer body wrapped in fire-resistant composite.
* Multi-finger hands for effective handling of tools.
* Integrate smooth gait generation and slip detection technology
to enable the robot to walk on low-friction surfaces.
* Sophisticated whole-body movement to be realized by
AIST-developed “generalized ZMP” technology.
* 42 degrees of freedom -- Head: 2 axes (pitch and yaw);
Arm: 7 axes (shoulder: 3, elbow: 1, wrist: 3) x 2;
Hand: 6 axes x 2; Waist: 2 axes (pitch and yaw); Leg: 6 axes x 2.
* Vision sensor: 4-lens stereo camera system for autonomous control;
4-lens stereo camera for remote control; scanning range finder.
* Control system: distributed control system;
Network: CAN (Controller Area Network -- Mbps).
Open Robotics University
. the first tuition-free engineering
degree-granting institution.
All of our university operations,
from student admission
to teaching and learning processes,
are powered by Google+ .
Our university is based in the USA,
and is currently seeking accreditation
from accreditation agencies in the US.
Our Goal:
to help technically oriented individuals,
who due to their socio-economic status
don't have funds to study robotics engineering,
learn to design and build robust, intelligent robots
and earn a university robotics engineering degree.
Students: Our students are located in
forty-five cities in twenty-six countries
-- pre-university through doctorate degrees.
President: Dr. Osato Osemwengie.
Dr. Osemwengie lives in
Dublin, Ohio, United States
[pictured with Mark Leon of the
NASA Robotics Alliance Project].
He served as a FIRST robotics coach
and mentor to three teams.
He was a presenter at the 2008 Atlanta, Georgia
FIRST Robotics World Championship.
He has completed
four master's degrees and a doctorate.
His experience while attempting to
complete fifth and sixth master's degrees,
in software engineering and
electrical engineering respectively,
influenced him to open
The Open Robotics University.
He strongly believes that future engineers
should not be subjected to outdated textbooks
and high-cost tuition in this digital age.
Any dedicated individual
can receive a high-quality education
via social media infrastructure such as Google+ .
Open Robotics University
applied to participate in
National Robotics Initiative.
Our DARPA Challenge Robotics Platform!!
Open Robotics University applied to participate in
tracks C and D of the DARPA Robotics Challenge.
DARPA’s goal is to accelerate the development of robots
that are robust enough to be used for
disaster response operations in hazardous environment.
The robot should also be
intelligent enough to navigate around
doors and chairs, manipulate vehicles
and power tools, and recognize levers and valves.
To meet this goal we have adopted
Kawada industries’ HRP-3 MK-II Robotics platform
for research to build our robot ORU-DRR Robot.
Since we are an open-source university,
talented individuals worldwide
who are not members of our faculty
or enrolled as students in our university
can join our Open Robotics University team
to design the software and robot for the competition
(check out the proposed specification of ORU-DRR below) .

National Robotics Initiative:
The goal is to “accelerate the development
and use of robots in the United States
that work beside, or cooperatively with people”.
To meet this goal we have adopted
Kawada industries’ HRP-4 Robotics platform
for research to build our robot ORU-NRI Robot
(Open Robotics University
National Robotics Initiative Robot).

OpenRov robotics platform:
. openrov.com aims to conduct research
and build world-class oceanic robots
to explore the world's oceans,
similar to work done by liquidr.com robotics.
mis.adds/openroboticsuniversity/Responsibility Project:
Make a Difference Robotics Projects
These types of projects, as Deitel claims in his book
"C++ How To Program" (2012),
are "meant to increase awareness and discussion
of important issues the world is facing.
We hope you'll approach them with your own
values, politics, and beliefs".
see more Science & Technology projects
from The Responsibility Project
. Welcome to The Responsibility Project,
a place to think about—and discuss—

what it means to do the right thing.
--. that isn't a list of interesting projects;
it's a blog for lecturing to teens .
. and, speaking of Doing The Right Thing,
why are we still using c++ ?

dream theory's tech perfection plan

12.8: adds/relig/dream theory's tech perfection plan:
. welcome to my wealth4all religion .
. you don't even have to believe in it to be saved by it,
but if everyone did believe,
it would save a lot of grief .
. the universe's god has no way of
not choosing bad experiences
(all that's left for the god to do
is making sure the bad comes first
and that some good experiences
are in power to keep civilization alive
during this boot camp of evolution);
but we've already seen
every sort of pain and inequality,
so, any time now,
we certainly could move on to
experiencing pure wealth4all .
. the emotional pains the god gives us
are put there just to provoke the wars
that promote evolution of the technology
that ensures eternal survival beyond sun death;
but, if everyone were to realize
god needs tech not war,
then we could simply volunteer
to put more money into tech .
. of course that would mean less money for
the popular entertaining games such as:
my family is bigger than your family ...
so, it's not a belief without costs;
but non-believers are welcome to be
dragged through war:
wealth4all is optional
until tech is perfected .
. it's not like god is punishing us for
being uncooperative with the needed
Tech Perfection Plan
but if you did believe that,
the obvious data certainly could
support that view;
however, at this point in the
universe's experience distribution
war is no longer needed for
pinning the pains to the beginning;
but, war is still a backup plan
in case we don't feel it necessary to
divert population expansion funds
towards the Tech Perfection Plan .
. wealth4all: I'm a believer .


concurrency both expressed and automated

8.23: adda/co/finding inherent concurrency automatically:
. how can we analyze a program in order to find
what parts of it can be done concurrently?
. at each step there are implicit inputs and outputs:
for step#1, do I know its outputs?
for step#2, do I know its inputs?
(also, when step#1 makes calls,
do any of those calls modify any space that might be
needed by anything step#2 calls? ).
. if step#2's inputs don't intersect step#1's outputs,
then you can do both at the same time .
. to know these things, calls have to be
functional so that program space is obvious .

. when calls have multiple parameters,
the order of their evaluation is unspecified;
which is tantamount to saying
that parameters in the same tuple
can be run concurrently (co-runnable) .
. this also applies to all tuples that are at
the same level of an expression tree:
eg, in h( f(a,b), g(c,d) ), a,b,c,d are co.runnable .
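. to make the idea concrete, here is a minimal Go sketch
(f, g, h, and the argument values are hypothetical stand-ins)
that evaluates the two independent subexpressions of
h( f(a,b), g(c,d) ) at the same time:

```go
package main

import "fmt"

// f, g, h are placeholder functions standing in for the
// subexpressions in h( f(a,b), g(c,d) ).
func f(a, b int) int { return a + b }
func g(c, d int) int { return c * d }
func h(x, y int) int { return x - y }

// evalConcurrently runs f(a,b) and g(c,d) concurrently,
// which is safe because their input sets don't intersect.
func evalConcurrently(a, b, c, d int) int {
	fr := make(chan int)
	gr := make(chan int)
	go func() { fr <- f(a, b) }()
	go func() { gr <- g(c, d) }()
	// h waits for both results, like the next level of the tree.
	return h(<-fr, <-gr)
}

func main() {
	fmt.Println(evalConcurrently(1, 2, 3, 4)) // h(3, 12) = -9
}
```

. the same pattern extends to any set of calls
whose inputs and outputs are known not to overlap .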

. a low-level way to compare
subprogramming with coprogramming,
is that while any call could be seen as
a new {process, cpu} activation,
in subprogramming, after making a call,
the caller agrees to stop working until that call is done .

. often we can find that a group of operations
are all handled by the same type mgt, [11.17:
-- in fact, that is the definition of "(operation):
a subprogram that is outputting
the same type that it's inputting -- ]
so even though they are nested,
you can send the whole thing in one call,
and, in that case,
the type mgt will know what's co.runnable .

8.23: todo.adda/co/review parasail's design:
. the parasail lang was able to implicitly parallelize a lot .
(see also comments of this page).

adda/co/explicit concurrency control:

. the abstraction to use for explicit concurrency control
is having a particular cpu called like a function:
cpu#n(subprogram, inputs, outputs)
-- the inputs are a copy of everything sent to it .
-- outputs are addresses to put results in .

. how is go.lang outputting?
when go.lang says go f(x)
they expect f to either call an outputter
or have x be an outputting channel .

. instead of saying cpu#n(...)
just say co f(x, out), or y`= co f(x) .
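. as a rough Go sketch of what `y`= co f(x)` could mean
(co and f below are hypothetical, not any real library):
launch f asynchronously and hand back a channel
that will eventually carry the output:

```go
package main

import "fmt"

// f stands in for the called subprogram (hypothetical).
func f(x int) int { return x * x }

// co launches fn(x) asynchronously and returns a channel
// that will carry the result -- one possible reading
// of the proposed `y`= co f(x)` syntax.
func co(fn func(int) int, x int) <-chan int {
	out := make(chan int, 1) // buffered, so the worker never blocks
	go func() { out <- fn(x) }()
	return out
}

func main() {
	y := co(f, 6)    // launch; the caller keeps working
	fmt.Println(<-y) // collect the output when it's needed: 36
}
```

. this is essentially the "future" pattern:
the caller blocks only at the point where
the output is actually consumed .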

8.24: adda/co/syntax:
. there are 2 forms of the function co
according to whether co is synchronous or asynch:
# co(f,g,h) -- co with multiple arguments --
expects spawn and wait .
# co f -- co with just one argument --
just launches f asynchronously .

8.31: co.self/dream/adda/co:
. the dream was something about
"(ways of doing backup for concurrency or ...)
"( ... and now that we can do that,
we can do this ...) [dream recall is too fuzzy].

rationales for go.lang's surprises

8.22: news.adda/go/assertions are no way to panic:
Why does Go not have assertions?
[ assertions allow you to say
crash if this condition doesn't hold true
because something is wrong
and the program will need debugging .]
. assertions are undeniably convenient
to avoid thinking about proper
error handling and reporting;
but, servers should not crash from non-fatal errors;
and, errors should be direct and to the point,
saving the programmer from interpreting a
large crash trace .
Precise errors are particularly important
when the programmer seeing the errors
is not familiar with the code.
no exceptions? well, panic is just as useful:
. func panic(interface{})
func recover() interface{} .
. these built-in functions assist in reporting and handling
run-time panics and program-defined error conditions.
. When a function F calls panic,
normal execution of F stops immediately.
Any functions whose execution was
deferred by the invocation of F
are run in the usual way,
and then F returns to its caller.
To the caller,
F then behaves like a call to panic,
terminating its own execution
and running deferred functions.
This continues until all functions in the goroutine
have ceased execution, in reverse order.
At that point, the program is terminated
and the error condition is reported,
including the value of the argument to panic.
This termination sequence is called panicking.
panic(42); panic("unreachable");
panic(Error("cannot parse")).
. The recover function allows a program to
manage behavior of a panicking goroutine.
Executing a recover call inside a deferred function
stops the panicking sequence by
restoring normal execution,
and retrieves the error value passed to the call of panic.
If recover is called outside the deferred function
it will not stop a panicking sequence.
In this case, or when the goroutine is not panicking,
recover returns nil.
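. a small runnable example of the pattern described above
(parse and safeParse are made-up names):
a deferred recover converts a panic
back into an ordinary error value:

```go
package main

import "fmt"

// parse panics on bad input, like panic(Error("cannot parse")).
func parse(s string) int {
	if s == "" {
		panic("cannot parse")
	}
	return len(s)
}

// safeParse turns a panic back into an ordinary error value;
// recover only works inside a deferred function.
func safeParse(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return parse(s), nil
}

func main() {
	n, err := safeParse("abc")
	fmt.Println(n, err) // 3 <nil>
	n, err = safeParse("")
	fmt.Println(n, err) // 0 recovered: cannot parse
}
```

. note that the error is returned through the
named result parameter, since by the time the
deferred function runs, normal execution has stopped .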

go vs python from the trenches 2012

8.22: news.adda/go vs python from the trenches 2012:
Graham King:
Would I be happy working with Go as my
main language? Yes, I would.
It’s a joy to work with,
and I got productive very fast.
Am I using it instead of Python
for all my new projects?
No. There are two reasons for that.
Firstly, it’s a very young language,
so library availability is limited
(for example, I need curses).
Second, well, Python is just so amazing.
[but] I see the benefit of static typing.

If I was still doing Java, or (heaven forbid) C++,
I would invest heavily in Go.
It was designed to replace them, and it does that well.
Go’s sweet spot is building servers.
(it makes concurrency safer and easier)

Other claimed benefits of Go over Python
are that it’s faster, and it’s “better at scale”.
For some things I’ve done
Python has been faster .
[and] The biggest difference [in speed]
is probably [just] in how well I write the code.
[also] nothing I do is CPU bound.
The “better at scale” argument doesn’t really apply
to building web apps with Django.
We scale by adding servers,
and [Django supports programming-in-the-large
with small self-contained ‘apps’ ]
Steven (March 6, 2012 at 23:22)
You can get pretty close to a REPL
[an interactive Read-Eval-Print Loop]
with goplay, [a web-based go compiler]
-- instead of interpreting, it compiles
[ go was designed for very fast compiling .]
It is possible to daemonize Go.
You could use a sync.WaitGroup
to make main wait for any number of goroutines to exit .
But more directly, you can do the same thing by
adding this to the top of your program:
defer runtime.Goexit()
write CPython extensions with Go:
. goPy with gccgo .
Once the libraries and command-line tool are installed,
the "gopy" command-line tool
generates the necessary C interface code;
then gccgo compiles the code
into an extension module.

walk and chew gum #Python

8.22: news.adda/python/co/walk and chew gum:
Google discourages Python for new projects?
[they encourage go.lang;
Python is said to need more resources?
I'm sure the problem is backwards compatibility .
. in the paper where they talked about Unladen Swallow
they mentioned wanting to remove the GIL
(the global interpreter lock for thread safety)
and this really surprised me coming from google staff,
because Python's project lead (another google staff)
has argued convincingly
that due to the language itself, [11.17:
ie, by being designed for interfacing C
(Jython and IronPython have no GIL)]
removing the GIL from CPython was impractical .
[11.17: at PyCon 2012, he adds:
Threading is for parallel IO.
Multiprocessing is for parallel computation.
The GIL does not hinder any of that.
... just because process creation in Windows
used to be slow as a dog, ...]
. Python has that old-style OOP,
which doesn't do much for encapsulation,
and therefore doesn't do much for thread safety
except to single-thread the interpreter .
. if you want something much like python
that also has good concurrency,
then google has given it to us in go.lang;
but, perhaps what he meant to say
is that it's like Intel's CISC architecture:
that was fundamentally slower than RISC;
but they virtualized it:
the machine instructions are converted
to RISC microcode .
. that's what big money could do for Python:
automatically find the inherent concurrency
and translate it to a threaded virtual machine .[11.17:
... but still interface flawlessly with
all C code on all platforms?
I'm no longer confident about that .]

[11.17: web:
. to overcome the GIL limitation,
the Parallel Python SMP module
runs python code in parallel
on both multicores and multiprocessors .
Doug Hellmann 2007 reviews it:
. you need install your code only once:
the code and data are both auto'distributed
from the central server to the worker nodes .
Jobs are started asynchronously,
and run in parallel on an available node.
The callable object that is
returned when the job is submitted
blocks until the response is ready,
so response sets can be computed
asynchronously, then merged synchronously.
Load distribution is transparent,
making it excellent for clustered environments.
Whereas Parallel Python is designed around
a “push” style distribution model,
the Processing package is set up to
create producer/consumer-style systems
where worker processes pull jobs from a queue .
Since the Processing package is almost a
drop-in replacement for the
standard library’s threading module,
many of your existing multi-threaded applications
can be converted to use processes
simply by changing a few import statements .
. see wiki.python.org/moin/Concurrency/
for the latest on Pythonic concurrency .]
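. the threading-to-processes conversion described above survives in today's standard library, where the Processing package grew into multiprocessing; here's a minimal sketch (the square function and the pool size are made up for illustration):

```python
# sketch: the "Processing" package reviewed above became the standard
# library's multiprocessing module; per the review, a threaded program
# can often switch to worker processes with little more than an import change.
from multiprocessing import Pool

def square(n):
    # CPU-bound work that the GIL would serialize under threading
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # jobs run in parallel worker processes, each with its own GIL
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

. note the __main__ guard: multiprocessing re-imports the module in each worker, so the pool setup must not run on import.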

8.22: news:

allowed characters in identifiers

8.22: adda/lexicon/allowed characters in identifiers:
. if the allowed characters in an identifier
could include the dot,
then there could be confusion in places,
as to whether the dot was declaring a var:
eg, what if you have the name x.F.int,
and then -- forgetting you did that --
decided later to define F.type ?
. now x.F is ambiguous:
are you declaring x to be F.type?
or are you referring to x.F.int ?
. therefore, an allowed identifier's
first character is a letter
or the character (_);
and, subsequent characters are alphanums
or the characters (') (_) (-);
anything else gets put in a box:
eg, [x.F].int; .
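. adda isn't implemented, but that lexicon rule can be sketched as a regex; is_identifier is a hypothetical helper name:

```python
import re

# adda identifier rule sketched above: first char is a letter or (_);
# later chars are alphanumerics or (') (_) (-).  note the dot is excluded,
# so x.F stays a dotted path of two identifiers, never one name.
IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9'_\-]*")

def is_identifier(s):
    return bool(IDENT.fullmatch(s))

print(is_identifier("x"))    # True
print(is_identifier("x'"))   # True
print(is_identifier("x.F"))  # False -- the dot splits it into two names
```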

class var's declaration syntax

adda/type/class var's declaration syntax:
. how does the class var get declared?
how about just like it's called?
. if it's for the instance then we're
writing definitions for self implicitly,
but if there's a class var,
we write as if the var's name is "(type) ...
. just as self's public vars are given as
just one tuple definition,
so too the public class vars are like so:
.< ... , type.(x.t), ... > -- a single public class var;
.< ..., .( x,y: t), ... > -- multiple public ivars;
. we could have also written that as:
.< ..., self.( x,y: t), ... >
. parentheses are required in a tuple syntax
even for the singleton? sure:
why have special cases confusing the reader?
in all cases of dotted naming, we are using a tuple,
which in all cases is represented by a parenthetical .
. how does this seem consistent
when generally x = (x) elsewhere?
it's consistent with other typedef syntax;
eg, in f().t the parentheses are required
in order to tell us that this is a function;
ie, any time we see f, we'll be expecting that
any next non-terminal found after f,
will be f's argument;
conversely, without the explicit paren's,
we'd expect f was not accepting args .

arrays are records are tuples

8.19: adda/dstr/array and record syntax as modes of access:
. ( x.y ) and ( x#y ) are both valid
regardless of whether x is defined as an array or record;
this ignores an idiom where
arrays represent lists of items,
while records represent items with named parts;
. well, my way lets you do it both ways,
but it should be admitted that the writer's freedom
is inevitably the reader's headache .

adda/dstr/array vs record precedence:
. if you have x#y.z
does it mean x#(y.z) or x#(y).z ?
my first impression is that y.z is the tightest binding;
ie, it means x#(y.z);
also, (x#(y) = x#y) only when y is an atom,
and you could argue that y.z is not an atom:
it's a sort of address-addition operation; [11.16:
but, of course, that would be a stretch!
it's more intuitively seen as a style of atom naming .
. finally, consider how functions and arrays
should be similar;
( f x.y ) should be seen as a variant of
( a# x.y ); therefore, the parsing is ( a#(x.y) ). ]
. x.y#z is unambiguous: x has component y,
which is an array taking parameter z;
ie, it's parsed as ( (x.y)#z ).

8.20: adda/dstr/arrays are a species of record.type:
. an array generates a list of component names,
and then declaring them all to be the same type .
. records are a generalization of this,
where a generated list of components
can have a variety of types .
. in fact, fully-oop arrays are actually records;
because, they often have a variety of types .
. arrays are parameterized types,
and records can be parameterized too
. here's a syntax for allowing a record
to describe parts of itself the way arrays do:
reca(n.int).type: ( a.t1, b.t2, #(0..n).t3 );
-- now x = (a:..., b:..., 0:..., 1:..., 2:..., 3:... ).
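. since adda isn't implemented, here's a Python mimic of that parameterized record; reca is a hypothetical factory returning a dict with the named parts plus the generated indexed parts:

```python
def reca(n):
    # sketch of the parameterized record above: named parts a and b,
    # plus an array-like run of components 0..n; every slot starts
    # as None here, standing in for each component's typed default.
    rec = {"a": None, "b": None}
    rec.update({i: None for i in range(n + 1)})
    return rec

x = reca(3)
print(sorted(k for k in x if isinstance(k, int)))  # [0, 1, 2, 3]
print("a" in x and "b" in x)                       # True
```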


Ada's core attributes

8.20: web.adda/ada/predefined attributes:
. what are Ada's reserved attributes?
here they are renamed for clarification
(these are just the most basic ones;
there are many more for floats, etc):
type'base  -- the unconstrained subtype;
enum in type'first .. type'last,
A'range(N) = A'first(N) .. A'last(N) -- array's Nth dimension;
type'value(enumcode number or image string);
type'image(value) is a string;
type'code(value) -- the integer representing the value (aka pos);
type'codebitsize -- obj's fixed size (when packed);
type'imagesize -- #characters(for largest expression);
type'++(value) -- successor;
type'--(value) -- predecessor .
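. for comparison, Python's enum can approximate a few of these renamed attributes (Color and succ are made-up names for illustration):

```python
from enum import Enum

class Color(Enum):   # hypothetical enum, just to exercise the attributes
    red = 0
    green = 1
    blue = 2

members = list(Color)                 # definition order is preserved
first, last = members[0], members[-1] # like type'first .. type'last
print(first.name, last.name)          # red blue
print(Color["green"].value)           # 1 -- like type'code (aka pos)
print(Color(2).name)                  # blue -- like type'value(code)

def succ(v):                          # like type'++(value) -- successor
    return members[members.index(v) + 1]
print(succ(Color.red).name)           # green
```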

model-view and composite tech

[thought I blogged this 9.2, but found as draft;
then found a note that the reason it was draft
was a concern that the body was still too buggy;
nevertheless, the first part is still interesting .]

7.17: web.adda/architecture/MVC vs MVVM:
8.31: summary:
. in trying to find an explanation of
the essential differences between
all the variants of the MVC architecture,
it was most helpful to see Alexy Shelest's
2009`MVC vs MVP and MVVM:
first he reminds us of MVC's original definition:
"( the “Gang of Four” book
doesn't refer to MVC as a design pattern
but as a “set of classes to build a user interface”
that uses design patterns such as
Observer, Strategy, and Composite.
It also uses Factory Method and Decorator,
but the main MVC relationship is defined by
the Observer and Strategy patterns. ).
. then, Shelest had an interesting summary of
the historical evolution of MV-architectures, eg:
"( Potel questioned the need for the MVC's Controller;
He noticed that modern user interfaces
already provide most of the Controller functionality
within the View class,
and therefore the Controller seems a bit redundant.).
. after reading that, I wrote this:
. I didn't get mvc either, and promptly redefined it:
subprograms should not be controlling a gui interface,
rather they should be designed primarily for
use by other subprograms,
just as unix tools are meant to be .
(well, unix takes a shortcut by
making its tool language a character string
that could also be readable by humans
but that was a security blunder because of parsing errors
that confuse critical datatypes like
{filenames, command names}).
. so, anyway, back to how humans fit in:
in order for a human to interact with
a subprogram that speaks type-tagged binary,
the human needs an agent who will
convert this robot speak into a gui .
. this agent is the Controller of the Model subprogram,
and it creates a View the human can interact with .]
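. the Observer pattern that defines the main MVC relationship can be sketched in a few lines of Python (Model and the lambda view-agent are made-up names):

```python
# minimal Observer sketch of the Model/agent split described above:
# the Model knows nothing about guis; the view-agent subscribes and
# renders the model's "robot speak" (plain data) for the human.
class Model:
    def __init__(self):
        self._observers = []
        self._value = 0

    def subscribe(self, fn):
        self._observers.append(fn)

    def set(self, value):
        self._value = value
        for fn in self._observers:
            fn(value)        # notify every observer of the new state

rendered = []
model = Model()
model.subscribe(lambda v: rendered.append(f"view shows: {v}"))
model.set(42)
print(rendered)  # ['view shows: 42']
```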

benefits of unique context operator

8.19: adda/lexicon/benefits of unique context operator:
[8.20: intro
. in 2012.4, this "(::) was presented as a context operator
but it didn't give a specific reason:
"( confusing having syntax"(type`value)
when (`) already has a specific meaning: (x`f); ). ]
. even more than confusing,
it's name`space limiting:
if you have the syntax ( type`attribute ),
along with ( type`method ),
then the type authors are limited in
what they can name their methods
because it could clash with type attributes; [8.20:
eg, for enums there is an attribute named first;
eg, for bible.type: {last, first},
bible`first = last; -- this type's first value is "(last);
but if the context operation uses the same symbol,
then ( bible`first ) could mean either
the first value of the bible enumeration,
or the bible value named "(first).
. by having a separate context operator (eg, ::),
we can say ( bible`first = bible::last ).]
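. Python's enum shows the same clash resolved the same way: iteration order gives the type's first value, while name lookup gives the value literally named first:

```python
from enum import Enum

class Bible(Enum):   # the bible.type: {last, first} example above
    last = 0
    first = 1

# the attribute sense ( bible`first ): the enumeration's first value
print(list(Bible)[0].name)   # last
# the qualified sense ( bible::first ): the member named "first"
print(Bible["first"].name)   # first
```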

8.19: 11.15 .. 16: review the syntax:

obj.type(select a variant of this type)
-- declares obj to be of this type;
obj`= value -- object initialization;
type::value -- fully-qualified enumeration value;

obj.component -- public ivar;
type::.component -- public ivar's default initial value;
obj#(component expression) -- public ivar;
type::#(expr) -- public ivar's default initial value;

type.component -- public class var;
type#(component expression) -- public class var;

obj`body/local -- private ivars;
type`body/local -- private class vars;

obj`message(...);  -- instance message call;
obj`message`body -- instance message's body;
type::`message`body -- instance message's body;
type::`message -- instance message uninstantiated
( practically the same as type::`message`body );
type`attribute -- class message call;
function(obj) -- call to function of instance
(may or may not belong to an obj's type's interface);
type::subprogram -- fully-qualified subprogram call;

type::function`body -- access function's body;
type`body/subprogram -- private class subprogram call;
obj`body/subprogram -- private instance subprogram call;
obj(expr) -- obj callable? apply('obj, expr);
function obj -- call with this obj

. notice there are separate namespaces for
{ value and function names
, .components
, `messages }; because,
components are found only after a dot,
and messages only after a backquote;
whereas, the namespace for value
is shared by that for functions;
otherwise, the parser would have problems:
both values and functions start with a name
but only functions expect the next lexeme
to be the argument of that function:
# function x -> apply function to x;
# value x -> syntax error
( unless x is a binary operator
or value is numeric and x is a dimension ).
. therefore, for each unqualified name
it must be typed unambiguously as
either a function or a value, not both .

11.15: mis:
"(review the syntax:
type::value -- fully-qualified instance value
type::`message -- access message's body
type::function(...) -- access function's body
obj`body/local -- private ivars;
type`body/local -- private class vars;
type`attribute -- class message call;
) . sometimes it is using (type::x) to mean eval x,
but other times don't eval?
and then this:
type`body/local -- private class vars;
. the body of the function is within
the body of the type:
and the way to refer to the function uneval'd,
is to ask for the function's body:
type`body/function`body .
. but if the function is also visible from the interface
we could also write:
type::function`body .

11.16: clarification:
"( review the syntax:
type.component -- public class var;
obj.component -- public ivar;
type::.component -- public ivar's default initial value
) . an interface definition has syntax for
both class and instance public vars,
and these are accessed similarly,
being dotted with their respective objects:
obj.component -- public instance var;
type.component -- public class var .
. if you hadn't defined an instance yet,
and still wanted to refer to an instance's component,
that would be done with the type's context operator:
and, since there was no instance involved,
the only meaning it could logically have
is being the component's default initial value .


task.type's syntax and semantics

8.16: syntax:
. when an object is given a message,
it is the object's type mgt that applies a method;
and, these type mgr's are tasks (co.programs);
but how do we get the obj's themselves
to be tasks? (ie, running code on a separate thread).


read-only and write-only params

8.14: adda/cstr/function/syntax/read-only and write-only params:
 if a parameter gets a pointer, (eg, /.int)
then it can modify both the pointer and the int,
so, shouldn't we have a syntax for expressing
what the algorithm intends to do with both?
. so, we still need a syntax for what is read-only .
. if a parameter expects a read-only,
then we can give it a modifiable
while expecting that it won't modify it .
. how about syntax for write-only?
maybe we should use (.typemark^) for that?
. the latest idea obviates a read-only syntax:
(inout param's)`f(x) -- in that example,
the x param is in-mode only (read-only)
that includes both the pointer and its target .
. notice that if the input is shared by any co.programs,
then we need to lock it as read-only or copy it,
unless we expect it to rely on the interactive value .]

API-GUI equivalence

8.7: adda/api-gui equivalence:
. every aspect of a subprogram's gui
should map to some feature of
the subprogram's interface (API);
so how does the API specify
an array of menus with submenus?
sometimes there is menu-izing naturally formed by
an app inheriting from a service type,
like the file menu is,
for apps that use the file system .
. a datatype's operations go under the Edit.menu;
because, that's the general term for the current datatype .
. a View.menu would belong to the human's agent
that was providing various ways to
format the display of data; [11.11:
but, a subprogram's API might have multiple views too .]


signaling in gui systems

8.12: adda/cstr/signals/signaling in gui systems:
how is the gui related to the signal?
. as a summary of 2012/07/gui-notification-systems
I was thinking gui's should be impl'd with
gui-optimized signals,
rather than the general signal system,
but then I wondered if that idea was
keeping mvc in mind,
so that generally all IPC (interprocess communications)
would be seeing the same windows that a
human can see with the help of a gui agent .


substitution principle

8.21: adda/oop/type/substitution principle:
. when a parameter is type rectangle,
then it can be assigned any rectangle
including a square;
because, a square is rectangular .
. classic oop has had a problem with that,
so, how do we sort out our subtypes
from this jumble of subclass confusion?
. the solution I've come up with
is known as Changing The Language:
ie, rather than assume a type.tag is constant;
we assert that related types
are expected to have a common surtype;
eg, for a var constrained to Number.type,
it will have a constant surtype.tag;
but, it will also have a modifiable subtype.tag,
that can vary within {N,Z,Q,R,C}
(unsigned, int, quotient, real, complex).
. a var constraint can be a subtype too:
eg, for a var constrained to Real,
it can only vary in {N,Z,Q,R};
ie, the subtypes whose domains are
subsets of Real's domain .
. and, as usual for non-oop,
if a var is constrained to a definite type, such as float32,
then both the surtype and subtype
are going to be constant
(hence the obj will be untagged);
but, since the surtype is Number,
and float32 is understood to be a subtype of Real,
you can ask for (float32 + int)
and multi-dispatching will still work .]
. in order to be a subtype of rectangle,
an object that is tagged as being a square
has got to be re-taggable as non-square
in the event that the rectangle interface
asks it to modify itself;
ie, a type that inherits from rectangle
and then adds the constraint
that it be a square too -- forever --
has decided to nix support for type compatibility,
which contradicts the idea of inheritance .
. to be a subtype (ie, support type compatibility)
the inheritance needs to work like this:
my subtype named square
has the properties of the inherited rectangle
but includes the constraint width = height .
. if you ask me to break that constraint,
then I'll re-type.tag myself as a rectangle
(rectangular but not square).
[11.9: corollary:
. if a var is constrained to be a square,
then the only rectangle it can be assigned to
is an unmodifiable one;
being assigned to a modifiable rectangle,
could potentially violate the type constraint,
and should result in a compile-time warning .
. this isn't a fatal error though;
because, the user could expect that
squares will grow in only a square way,
-- as would be the case for doublemy(rectangle) --
or the user may want to handle the exception
by trying another algorithm;
therefore, the warning should remind the users,
that they appear to be depending on
either having exceptions handle
the type.constraint.error,
or having the rectangle operation
not violate a square's subtype.constraint .]
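. a Python sketch of the re-tagging idea (Rect, subtype, and set_width are made-up names; adda itself isn't implemented):

```python
# sketch of the scheme above: one object carries a mutable subtype tag;
# a rectangle operation that breaks width = height re-tags square -> rectangle,
# so the object stays compatible with the rectangle interface forever.
class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h

    @property
    def subtype(self):
        return "square" if self.w == self.h else "rectangle"

    def set_width(self, w):
        # the rectangle interface may break squareness; the tag just follows
        self.w = w

r = Rect(2, 2)
print(r.subtype)  # square
r.set_width(3)    # a rectangle op that violates the square constraint
print(r.subtype)  # rectangle
```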


obj'c categories and Ada's hierarchical pkg

8.14: adda/oop/obj'c categories and ada hierarchical libs:
2012-01-31 Objective-C Categories
1.20: adda/type/obj'c categories:
. Apple is using the obj'c Category for MVC separation;*
eg, string has many uses in a command-line interface,
so it exists in the core package without any methods for
drawing on a gui;
categories are then simply extending that string type,
instead of having subclasses reuse that string type
in yet another type;
so just one type can exist
yet with relevant parts in different packages .
* see { Buck, Yacktman}`Cocoa Design Patterns

. isn't that use of categories needed only because
the designers were assuming certain requirements
that are demanded only by the current oop model?

. if your string wants to express gui-specific tricks
such as appearing in a particular font,
or being arranged so as to follow a curve,
that need should be served by the use of a drawing record
which then has a string as one of it's parts .
(ie, it's ok to have 2 different classes !)
. a main point of the book "(design patterns)
was to critique oop's use of subclassing;
and, that criticism might apply equally well
to this use of categories;
but, generally, categories do have a good purpose:
they allow separate compilation of class extensions
without having to recompile the original interface
which might then require a recompile of
all the clients of that interface .

. this reminds me of Ada's hierarchical libraries;
in Ada you can reuse oldlib's module
with the syntax:
package oldlib.additionalmethods
(by including oldlib's name like that
within your new package's name,
your new package includes the binaries of oldlib ).
. now current clients of oldlib
still don't have additional methods,
but, future clients of oldlib.additionalmethods
will have access to both modules
with that one import .
. obj'c categories by contrast,
allow you to add the same new modules
but this addition will also be affecting
current clients!
-- the category's method has access only to
the target's interface, not its internals;
so, a category can't fix every bug;
yet it can create bugs because it can
override system methods .

. I have 2 competing ideas on this:
# we should be able to describe in adda
any idea of any other language;
# we should not be supporting the use of ideas
that are either complicating or insecure .

here's how the Category idea might be impl'd:
. when a datatype agrees to be modified by categories;
then at run-time, the type mgt is a modifiable object
and, it provides a dictionary
where you can see if it supports a message .
. it can dynamically add entries to this dictionary
(allowing it to support new messages),
and it can change method addresses
(allowing it to update the methods of old messages).
. now all we need is an uncomplicated way to
decide which types are so modifiable .
. perhaps a type wishing to participate
could declare one of its components to be
a dictionary with a standard name
(perhaps SEL, in honor of Obj'C);
then anytime a message is unrecognized,
the run-time would check SEL for the method .
11.8: correction:
. in order to work like an obj'c category,
it has to check SEL all the time,
not just when a message is unrecognized,
in case one of its methods got overridden .]
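. a Python sketch of that SEL scheme (Greeter, send, and the lambdas are made-up for illustration):

```python
# sketch of the SEL idea above: a type that opts in to categories keeps a
# message dictionary; per the 11.8 correction, dispatch checks SEL on
# every call, so a category can both add new messages and override old ones.
class Greeter:
    SEL = {}                  # category-installed methods live here

    def hello(self):
        return "hello"

    def send(self, message, *args):
        if message in self.SEL:                    # checked every time,
            return self.SEL[message](self, *args)  # so overrides win
        return getattr(self, message)(*args)

g = Greeter()
print(g.send("hello"))                         # hello
Greeter.SEL["hello"] = lambda self: "howdy"    # category overrides hello
Greeter.SEL["bye"] = lambda self: "bye"        # category adds a message
print(g.send("hello"))                         # howdy
print(g.send("bye"))                           # bye
```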


PLY(Python-based Lex-Yacc) and pyCparser

8.20: web.adda/dev.py/ply
11.2: PLY compared with ANTLR:
There are several reasons to use ANTLR
over one of the Python parsers
like PLY and PyParsing.
The GUI interface is very nice,
it has a sophisticated understanding of
how to define a grammar,
its top-down approach means the
generated code is easier to understand,
tree-grammars simplify some otherwise
tedious parts of writing parsers,
and it supports multiple output languages:
it builds parsers best in C, C#, and Java;
but also has some support for generating Python .
ANTLR is written in Java;
Unlike the standard yacc/lex combination,
it combines both lexing and parsing rules
into the same specification document,
and adds tree transformation rules
for manipulating the AST.
. so, if you prefer python tools over Java, PLY ahead!


self's id

8.11: adda/oop/self's id:
7.28: intro:
. the expression tree being sent to number's type mgt
contains ptrs to the symbol table nodes;
and, those nodes include type tags .
. the symbol table is constant,
being part of the function's template;
the code points to symbol table nodes,
and then those nodes point to the stack,
or have immediate data if constant .
. how does an object know its own id?
an expression tree sent to a type mgt
has a context (process, act'rec)
that tells it who the caller is,
and all the obj references in the expr'tree
are relative to that context .
. messaging an obj means calling its type mgt,
and then that type mgt is calling the methods
which then have access to the current context
as a global just like in assembly language
where programs have access to a current
stack pointer and program counter .
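. a rough Python sketch of that layout (the dict shapes and slot numbers are invented for illustration):

```python
# sketch of the layout above: the symbol table is part of the constant
# function template; each node holds a type tag and either immediate
# data (for constants) or a slot into the current activation record.
template = {
    "symbols": {
        "x":  {"type": "int", "slot": 0},           # points into the stack
        "pi": {"type": "real", "immediate": 3.14},  # constant: immediate data
    }
}

def lookup(name, act_rec):
    node = template["symbols"][name]
    if "immediate" in node:
        return node["immediate"]
    return act_rec[node["slot"]]   # resolved relative to the caller's context

act_rec = [42]                # this call's activation record
print(lookup("x", act_rec))   # 42
print(lookup("pi", act_rec))  # 3.14
```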

infratypes of surtypes, phyla of taxa

8.13: adda/oop/rethinking terminology:
. I have terminology for biop (binary operations) oop
arranged as supertypes existing as
one of fixed number of subtypes;
eg, Number is a supertype because it manages the
binary interactions between the following types
which are therefore called subtypes of number:
integer, real, decimal, rational, irrational, complex .
. classic oop hasn't played well with biop's:
being objected-oriented has meant that
(obj+x) had to be expressed as (obj.+(x))
and then obj's type, say int, needed to have
(+)-operations for each possible numeric type
(int, real, rational, irrational, complex);
what we needed was multi-dispatching:
where (obj+ x) gets sent to the supertype,
which finds the right subtype to handle it .
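. a Python sketch of that multi-dispatching, using Python's own numeric types to stand in for the subtype tower (number_add and TOWER are made-up names):

```python
from fractions import Fraction

# sketch of surtype dispatch above: Number finds the common subtype of
# its two operands in the tower Z < Q < R < C, then does the (+) there,
# instead of every subtype defining (+) against every other subtype.
TOWER = [int, Fraction, float, complex]   # standing in for Z, Q, R, C

def number_add(a, b):
    # the surtype picks the wider of the two subtype tags
    rank = max(TOWER.index(type(a)), TOWER.index(type(b)))
    wider = TOWER[rank]
    return wider(a) + wider(b)

print(number_add(1, Fraction(1, 2)))  # 3/2
print(number_add(1, 2.5))             # 3.5
```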

. the notion of subtype
can also include range limits .
. there are naturally limits imposed by
the numeric types that machines provide,
such as what can fit in multiple of bytes;
eg, int8, int16, int32, float32 .
. within my theory of oop,
these might be called infratypes,
for reminding of the term infrastructure .
. infratype is a relative term:
regardless of whether it's a cpu type,
or a type you reused from a library,
an infratype of T.type is any type that has been
reused for implementing T .
. I'm realizing that infratypes
should be supported by the type.tag
(simply because we should support
space-efficient numerics,
and if we do that, then we need to know
which infratype we're using:
8, 16, 32, or extendible);
what I've been calling subtypes
(in the biop oop context)
can instead be called ideal infratypes
(in contrast to optimized infratypes);
what I've been calling the supertype
can instead be the opposite of infratype:
that seems to be the word: surtype .
-- also some extra checking just now
shows that the idea I'm trying to convey,
is the same as that found in the words:
{surname, surface} -- the outermost part .]

[10.5: 10.7:
. in contrast to the ideal infratypes
that are designed by mathematicians,
the optimized infratypes are chosen by the
intersection of what types the platform provides
and what is of use by the
constrained versions of the ideal infratypes .

. there is a relationship between
subtypes and infratypes:
. within a polymorphic type,
infratypes are the type's possible forms (morphs);
whereas a value whose type is a subtype of T,
can be used anywhere a T-typed obj is permitted;
but the reverse is not true:
if you have a variable obj whose type is a subtype of T,
it cannot be used anywhere a T-typed obj is permitted;
rather the replacing obj must be able to handle
every value and every operation
that could be handled by the obj' it's replacing .

. a surtype has a finite set of ideal infratypes,
ie, types distinguished by having differences in
either the allowed set of operations,
or the data format of the current state .
. within each ideal infratype,
we then have a possibly infinite number of
additional subtypes
according to constraints on
value attributes, eg, on the value's range,
or the scientific format's number of digits,
or on the number of fraction digits .]


safe pointer details consider concurrency

8.4: adda/dstr/safe pointers/
space-efficiently pointing to both static type info and obj:
. we need to point at the symbol table index
not the obj itself, because,
only the symbol table index has the type tag
in the case when the type is static .
. as for space-efficiently doing this
at first I was worried about the
huge length of full paths
(process#, act'rec#, symbol table index);
but, [8.3:
just like urls have versions,
{fullpath(absolute), filename(relative addressing)},
pointers too should be available as
both long and short versions:
the full path is
(process#, act'rec#, symbol table index)
but if you are in the same act'rec,
you'd use a shorter version of the pointer:]

. if you are in the same activation record,
then the pointer is a mere 16bits:
(we assume here that a pointer needs a 3bit type tag,
and that the max# of symbols is 2^13) .

[8.21: 3bits?
. locally we need no tag at all,
because we just assume everything is local;
otherwise, if it's a general pointer,
it will have the general type.tag arrangement
which usually means a static supertype,
and a dynamic on-board subtype,
which in the case of pointer needs 2 bits
for these 4 cases:
( internet address
, local to system
, local to process
, local to act'rec ) .]
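. a Python sketch of the 16-bit short pointer (pack and unpack are made-up helpers; the bit widths are the ones above):

```python
# sketch of the short pointer above: 3 tag bits + 13 bits of symbol
# index fit in 16 bits (max# of symbols = 2^13); a 2-bit locality code
# could distinguish the four cases: net, system, process, act'rec.
TAG_BITS = 3
INDEX_BITS = 13

def pack(tag, index):
    assert 0 <= tag < 2 ** TAG_BITS and 0 <= index < 2 ** INDEX_BITS
    return (tag << INDEX_BITS) | index

def unpack(ptr):
    return ptr >> INDEX_BITS, ptr & (2 ** INDEX_BITS - 1)

ptr = pack(0b011, 1234)  # tag 3 = local to act'rec (say)
print(ptr < 2 ** 16)     # True -- a mere 16 bits
print(unpack(ptr))       # (3, 1234)
```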

8.26: processID irrelevant:
. do we need to track the processID?
. any obj' we'd want to point at
is located within some act'rec;
so, isn't the act'recID enough?
8.28: 8.26:
. the act'rec obj' could have a subtype.tag
so that the same act'rec supertype
could represent all sorts of objects
like subprograms, coprograms, type mgt obj's, etc .
. the main place for finding out
how the current act'rec is related to a process
is to ask the task mgt, which uses
this act'rec ID as a node index
into a tree of processes and their act'recs .

8.28: processID irrelevant despite pointers:
. when using pointers, the process ID matters;
because, we can give a pointer for assignments?
but that's not the problem of our pointer:
it has only to locate a target,
and it's up to exec to do the right thing .
. any time an assignment takes place
the target must be either a global obj,
(these are essentially act'recs because
they're made thread-safe by msg-taking functions)
or the owner had to have ordered the assignment
(that means either the owner is suspended for a subroutine
or the target is locked for a promised assign by an async).

8.26: processID fundamental?:
. we'd like to keep pointer sizes small,
so if we could know that an act'rec would
stay local to a particular process,
then we could possibly divide the number of act'recs
by the number of processes .
. we could say that each process has its own heap;
and, just as each act'rec has few symbol nodes,
a process has few act'recs,
so this is another form of subheap
the process subheap vs an actrec subheap .
. unfortunately, given the use of recursion,
the number of act'recs per process can still be
arbitrarily large, so this would do nothing for
the size of an act'rec ID .
. the situation where process is fundamental
would be in systems with unprotected vars;
because, then process is a unit of thread safety:
anything can access any var within the process,
and then the process has only one thread .
. what we could have instead,
is a system where encapsulation and locks
are the basis of thread-safe computing:
. anything accessing a var must instead
ask that var's type mgt to do the accessing,
and then the type mgt has only one thread .
. a var lock gives sole ownership of a var
to another thread .
. in that sort of system,
a process is nothing more than an act'rec
that is run concurrently vs sequentially .


love the cloud but use nothing else?

9.26: cyb/net/love the cloud but use nothing else?
(response to:
Tell us what it would take for you to
use "nothing but the web"
-- googleappsdeveloper.blogspot.com/2011
. I was one of those palmtop enthusiasts
who everyone else would just shake their head at
until smartphones came along .
. what I loved about my little Palm OS device
was that it let me do my laptop essentials:
type-in, search, and review my daily notes,
along with calendar and reminder automation .
. when I moved beyond the Palm OS,
up to a full-featured file system
my palmtop was also an ereader (pdf's, html).
. why would anyone
want these activities to not work
in the event the net was down ?
therefore, I conclude that
using nothing but cloud computing
is just nuts .
. what's even more bizarrely nuts,
is thinking that we can depend on the web
to give us the security we lack from using
a monolithic OS like mac or linux
-- juicy, fruity, nutty nuts .
. your infrastructure needs to start with
a microvisor like okL4,
and then use layers of locality:
. the pim and ereader are onboard,
the animation is in an on-site server
which is being sync'd with internet servers .
. this is the same way layering is seen in
{registers, cache, ram, hard-drive}.
... and it saves a lot of energy!



7.10: news.cyb/dev/I Programmer:
Programming news, projects,
articles and book reviews:
· www.i-programmer.info;
iProgrammer news is written
for programmers by same .
iProgrammer news also doesn't aim to be
the first or up-to-the-minute.
Our news does cover the latest releases,
intentions and developments
but in depth and with a
commentary that aims to explain
the significance of the event.
It also marks significant anniversaries
with links to relevant articles.
9.6: web:
. it has a table of subject-specific pages;
notable subject-specific lists of articles
include theory
(eg, a better way to program
-- see my article about this cp4e heroism );
and security
(eg, 2012.2 Google's $1 million for Chrome Hack
2012.3: Chrome Hacked Twice at CanSecWest)

adde influenced by Bret Victor, Chris Granger

9.8: 9.23: adde/adde influenced by
Bret Victor and Chris Granger:

. Granger's write-up of Light Table
-- and the Victor video that inspired it --
have me wanting something new for adde:

. as it runs your code,
it opens all the files being called by main,
and also opens a simulation of the code
( some of that idea is already embodied
in the adda feature where all types
have an associated graphical image,
and you just have to start debug mode
in order to see the image of
every variable in main ) .
. once you get some working code,
it is constantly rerunning your test case
with every change to your code .
. so we get not only continuous testing,
but also continuous simulation of the testing;
and, of course, the simulation and the code are linked;
so, selecting one selects the other .

"(In light mode, Light Table lets you
see called functions not just by
highlighting their calls in your code,
but also by showing you their code to the side.
We shouldn't have to constantly
navigate back and forth to see this . )
. this works for small files,
but for much larger ones
I would like a combination of this and
Apple's idea of being able to
open and close a file simply by
toggling the space bar .
. in my combination,
the main is opened on the left;
then for each file used by main,
it opens that file only partially,
just to show a thumbnail view of, say, 5 lines
showing the function's header info:
the signature and the summary string .
. when you click on the thumbnail, it opens fully,
when you arrow down, that full view is
filled with the next thumbnail's file .

. the programmability of this is important:
the layout editor should have a language for
describing not only how windows are arranged,
but also how they are behaving .
. the first time I had that idea
was wondering how to tell the editor
that we need 3 windows to be columns,
such that text being added
flows first into the first column,
with overflow going to next column, etc .
. we might have other behaviors,
so, how do we let the user design that,
and create a command for it ?
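. here is a minimal sketch of how such a layout-behavior command
might be expressed; the Column and ColumnFlow names are hypothetical,
just to show the column-overflow behavior as data plus a rule:

```python
# hypothetical sketch: a tiny layout description where a region's
# behavior (here, column overflow) is part of the layout spec
from dataclasses import dataclass, field

@dataclass
class Column:
    height: int                       # lines this column can hold
    lines: list = field(default_factory=list)

@dataclass
class ColumnFlow:
    """text added to the region flows into the first column;
    overflow spills into the next column, and so on."""
    columns: list

    def add(self, line: str):
        for col in self.columns:
            if len(col.lines) < col.height:
                col.lines.append(line)
                return
        raise OverflowError("all columns are full")

layout = ColumnFlow([Column(2), Column(2), Column(2)])
for text in ["a", "b", "c", "d", "e"]:
    layout.add(text)
# text flowed as: ["a","b"], ["c","d"], ["e"]
```

. the point is that the user could define ColumnFlow themselves
and then bind it to a command in the layout editor .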

. in a composite doc' we have the choice of
whether regions overlap or not .
. when I open a thumbnail,
it covers other thumbnails,
but it doesn't cover other subwindows
like that of the simulation or the main text .

. an alternative behavior for a subwindow
is that instead of an open file
being allowed to overlap the thumbnails,
it pushes them aside,
and the region becomes scrollable,
so that I can see a stream of open files
mixed in with a stream of thumbnails .

. there are subwindows with dedicated purposes:
the project's folder system, the simulation,
the main, and the subprograms main calls .
. there are a lot of subprogram subwindows
so we can toggle them as all thumbnails,
all open, or opening only the
currently running subprogram .
. if your center of attention is not main
but something main called,
then you want main's region thumbnailed,
just like a stack of activation records:
closing your currently active file
means seeing the stack of thumbnails
that it jumped out from .

Granger's Code with a little illumination:

"( Here I find a bug where I wasn't passing x correctly.
I type (greetings ["chris"]) and immediately see
all the values filled in
not just for the current function
but all the functions that it uses as well. )
--. this is the same idea I just mentioned,
that of showing the activation stack:
we're showing not just the code,
but wherever there was a
variable name in the code,
now there is a named box with a value in it
-- and of course,
every var in the code is instantiated
so there are values at
every mention of that variable .
. variables should be
toggle-able between the image and code:
an array may represent a picture,
or a list of (rgb) color vectors .

. another dimension of toggling
is the outliner feature:
all code and values are {collapsible, expandable}
and you can do this symbolically too:
there are not only commands like close all,
but such things as:
open only what names match (this pattern) .
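. a sketch of that symbolic outline command, with a hypothetical
Node tree and an apply_command that closes everything and then
opens only the nodes whose names match a pattern:

```python
# hypothetical sketch: symbolic outliner commands such as
# "open only what names match (this pattern)"
import re

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.open = True

def apply_command(root, pattern):
    """close all, then open only nodes whose name matches."""
    regex = re.compile(pattern)
    def walk(node):
        node.open = bool(regex.search(node.name))
        for child in node.children:
            walk(child)
    walk(root)

tree = Node("main", [Node("draw_scene"), Node("draw_hud"), Node("update")])
apply_command(tree, r"^draw")
# only draw_scene and draw_hud are left open
```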

. watching the Granger video again,
I see he does automatic filing of functions:
you can start off in one editor window,
and then when we start code for a new function,
it buds into a new editor window too;
and -- my idea here --
when you have nested functions
then you should have nested editor windows,
and of course,
they can be resized with scrolling bars,
or made so control-arrows and page{up,down}
can do the scrolling .
. the outliner menu includes
several massive window controls;
eg, we can say:
recently used windows are 25 lines high,
or say:
all windows are 10 lines long until I
focus on one for a full view .
. a primary feature seen from the vid,
is that when the side region is showing the
files that are called by the main region,
they become part of the debugger show:
ie, when you start launching main's test program,
the files to the side turn into instantiations of
the function`bodies that main enters;
so, -- my question here --
what should it do if you have recursions,
and multiple instances of a subprogram?
the 2 obviously possible behaviors are:
# show all instantiations;
# show the current instantiations .
. in the [show current] alternative,
as the new instance is created,
you see the new instance
overwriting the previous instance .
. after the instance is done,
it is re-overwritten by the prior instance .
. in the [show all] alternative,
we need to show a call tree
perhaps expressed as dots for each node,
and then you use [birds eye view]:
drawing a box around the part you want
or using the pinch gesture
on the part of the call tree
that you want to see thumbnails of
(thumbnails are opened in the usual way).
. from the call tree's [nodes as points] view
you can also use arrow keys to
turn nodes into files or thumbnails,
one node at a time:
up= parent node, down= 1st child node;
{right,left}= nearby sibling nodes .
. we can use the finder with opener, eg:
open all the instances of subprogram x .
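. a sketch of the record-keeping behind [show all]: a hypothetical
traced decorator that logs every instantiation into a call tree,
so a finder can later "open all the instances of subprogram x":

```python
# hypothetical sketch: recording every instantiation as a call tree,
# so a finder can later open all the instances of a subprogram
import functools

call_tree = {"name": "root", "children": []}
_stack = [call_tree]

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        node = {"name": fn.__name__, "args": args, "children": []}
        _stack[-1]["children"].append(node)   # attach to the caller's node
        _stack.append(node)
        try:
            return fn(*args, **kwargs)
        finally:
            _stack.pop()
    return wrapper

def instances_of(node, name, found=None):
    """walk the call tree, collecting every instance of a subprogram."""
    found = [] if found is None else found
    if node["name"] == name:
        found.append(node)
    for child in node["children"]:
        instances_of(child, name, found)
    return found

@traced
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(4)
# instances_of(call_tree, "fib") yields one node per recursive call
```

. each node in call_tree is what the viewer would render as a dot
in the [nodes as points] view, or expand into a thumbnail .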

CP4E hero Bret Victor

9.8: news.adde/light table and bret victor:

. Bret Victor's video has a cp4e mission
and shows off the new tools he's built
that help us develop our creative ideas .

. amazing what he's done;
he has an IDE where you can click on any var,
and it turns into the widget needed to adjust it;
but then it's also
re-running your code after every change;
so, if your program generates a picture,
his scripts are custom GUI's for
painting particular classes of pictures
where you paint by both coding
and by adjusting code parameters
-- all by GUI! (as you do edits,
the page is sprouting widgets to help you).
. when you slide a for-loop var,
the program is re-running for every
value in the for-loop's change range
which is causing an animation effect
-- a new way to experiment with video .

. even more amazing,
this IDE creates a complete mapping
between the subpicture being drawn
and the line of code that drew it;
so clicking in the picture
causes the editor to highlight a line of code,
and conversely, selecting a line of code
highlights the subpictures it drew .
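. one way to build that mapping (a hypothetical sketch, not
Victor's implementation): each drawing call records the source line
of its caller, so picking a shape can highlight its code:

```python
# hypothetical sketch: a draw call that remembers which source line
# produced each shape, so clicking a shape can highlight its code
import inspect

shapes = []   # each entry: (shape description, source line number)

def draw_circle(x, y, r):
    # f_back is the caller's frame; its f_lineno is the line that drew
    line = inspect.currentframe().f_back.f_lineno
    shapes.append((("circle", x, y, r), line))

def line_for_shape(index):
    """the editor would highlight this line when the shape is clicked."""
    return shapes[index][1]

draw_circle(10, 10, 5)
draw_circle(40, 10, 5)
```

. the reverse direction (code line to subpictures) is just a
filter of shapes on that recorded line number .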

. now, speaking of animation,
the next thing needed for visualization
is to create trails for specific subpictures,
so that as you modify a parameter,
it immediately shows how that is affecting
the path of the trail .
. first he's pausing at the
end of the interesting time interval,
then he's using rewind to get to the
beginning of his time interval of interest;
and finally when he's adjusting parameters now,
for each new parameter change,
it's drawing a new trail specific to the
time interval of interest .

. why are there symbols in a schematic?
because it's the easiest way to pen them,
Victor notices ( and there's the icon effect
-- creating instant recall of the concept );
but now that schematics are in computers,
we should be using their dynamic expression:
so, in an electronics diagram,
we could replace the symbols with
little videos of the analog signals
that are generated at that symbol's node .

. likewise, computers can make coding so easy
that anyone can do it;
indeed, it seems that the best coders
are just those who are best at
imagining a computer's working internals,
but we have the computer to do this for us,
so why don't we?!
. for example he shows another IDE,
this one being tuned for coding instead of drawing
in which the editor has 2 areas:
on one side is the code,
and on the other is a simulation of the code;
ie, the simulation has a list of all the locals
that are so far declared in your function,
-- and it does this in real time:
eg, as you write the function's header,
the simulation shows the parameters as undefined;
then you can modify the simulation
by instantiating any undefined variables
(so the assignment happens only in the simulation,
not your code), and then as you have code
assigning a complex expression to a var,
the simulation tries to eval that assignment;
and, this way you are testing as you code
-- every time you hit enter,
it's relaunching the build&run
for per-line-of-code testing (quick!).
. if you add a loop to your code,
then all the var's being affected by the loop
are placed in a matrix
with vars in the columns,
and iterations per row (or vice versa).
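. the matrix idea can be sketched as follows (hypothetical names;
the IDE would capture these snapshots automatically, not via an
explicit rows.append in your code):

```python
# hypothetical sketch: capturing the loop's variables at each
# iteration, giving a matrix with vars in the columns
# and one row per iteration
def trace_loop(n):
    columns = ("i", "total")
    rows = []
    total = 0
    for i in range(n):
        total += i
        rows.append((i, total))   # snapshot of every var the loop affects
    return columns, rows

cols, rows = trace_loop(4)
# rows == [(0, 0), (1, 1), (2, 3), (3, 6)]
```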

. he then gives us a history of people who
had a passion for using computers to
help people with making full use of their brains;
pointing out that this work is important because
people need these dynamic tools
to unleash their full human potential .
. he was especially impressed at
those who made computers child's play;
and reminded the audience's budding techies
that we can choose as our identity
to have a social conscience,
and not just a technical expertise .

pioneers of cp4e:
(computer programming 4 everybody)

# Larry "nomodes.com" Tesler:
. not just the developer of copy&paste
but the one realizing we could do better than
obstructive modal versions of copy&paste:
he recognized a wrong
that was unrecognized by the culture .

# Doug Engelbart -- enabling mankind:
. proponent of realtime human-computer interaction,
not just the inventor of the mouse:
he wanted to solve mankind's urgent problems with
computer-assisted knowledge workers .

# Alan Kay -- enabling children:
. everything he did for windows, menus, and oop,
was to enable children to be computer literate
to make them more enlightened adults!
. he studied those who studied how children think,
in order to help them use computers .
[. in 2006, Kay's Viewpoints Research Institute
was funded by USA's NSF for the proposal:
Steps Toward the Reinvention of Programming:
A Compact and Practical Model of
Personal Computing as a Self-exploratorium [pdf]
see the comments on it:
. they critiqued him for bucking the trend of
taking advantage of cheaper hardware
by filling it with more features and bling;
but ironically, that attitude
still hadn't been able to provide all children
with a computer that would help them learn
-- children's computers need to be
as cheap as calculators;
but instead we insist on continuing to
code above the child's price point .]

# Richard Stallman -- freedomware:
. the king of fighting to change culture
for the good of mankind .

9.8: web: i-programmer.info`
Alex Armstrong 10 March 2012`

A Better Way To Program
This video will change the way
you think about programming.
The argument is clear and impressive
- it suggests that we really are building programs with
one hand tied behind our backs.
After you have watched the video
you will want the tools [that he has] demonstrated.

We often focus on programming languages
and think that we need a
better language to program better.
Bret Victor gave a talk that demonstrated
that this is probably only a
tiny part of the problem.
The key is probably interactivity.
Don't wait for a compile to complete
to see what effect your code has on things
- if you can see it in real time
then programming becomes much easier.
Currently we are programming with
one arm tied behind our backs
because the tools that we use
separate us from what we write and what happens.
Interactivity makes code understandable.
Moving on, the next idea is that
instead of reading code and understanding it,
seeing what the code does is understanding it.
Programmers can only understand their code by
pretending to be computers and running it in their heads.
As this video shows, this is incredibly inefficient
and, as we generally have a computer in front of us,
why not use it to help us understand the code?
If you watch just one video this year
make it this one.
See Light Table - a Realization of a New Way to Code
[ an implementation of Victor's idea .]
[ it was funded by a kickstarter page .]
You can now try Light Table
via the Light Table Playground!

9.8: web: Bret Victor
. Bret Victor's inspiring resume
and his many cool writings .
. Bret Victor`Inventing on Principle [vid]
from CUSEC 2012 (Canadian University
Software Engineering Conference)
-- a three-day event that brings together
undergraduate and post-graduate students
for learning, networking,
and sharing their passion for software.
9.8: web: chris-granger.com`
Light Table --a new IDE concept
Despite the dramatic shift toward
simplification in software interfaces,
the world of development tools
continues to shrink our workspace
with feature after feature
in every release.
Even with all of these things at our disposal,
we're stuck in a world of files
and forced organization
- why are we still looking all over the place
for the things we need when we're coding?
Why is everything just static text?

Bret Victor hinted at the idea that
we can do much better than we are now
- we can provide instant feedback,
we can show you how your changes affect a system.
And I discovered he was right.
 all of this culminates in the ability to see
 how values flow through our entire codebase.
 Here I find a bug where I wasn't passing x correctly.
 I type (greetings ["chris"])
 and immediately see all the values filled in
 not just for the current function
 but all the functions that it uses as well.

Light Table is based on a very simple idea:
we need a real work surface to code on,
not just an editor and a project explorer.
We need to be able to
move things around, keep clutter down,
and bring information to the foreground
in the places we need it most.

Light Table is based on
a few guiding principles:

# You should never have to look for documentation .
# Files are not the best representation of code,
just a convenient serialization.
# Editors can be anywhere
and show you anything - not just text.
# Trying is encouraged
- changes produce instantaneous results .
# We can shine some light on related bits of code .

Docs everywhere
When you're looking at new code
it's extremely valuable to be able to
quickly see documentation left behind by the author.
Normally to do so you'd have to
navigate to the definition of the function,
but Light Table ghosts this information in to the side.
Want to know what partial does?
Just put your cursor on top of it.
This makes sure you never have to worry about
forgetting things like argument order ever again.

. we should be able to search all our documentation
in place to quickly see what it is.
Don't remember what was in the noir.core namespace?
It's one ctrl-f away.

This is especially handy for finding
functions you may not even know exist
and seeing their docs right there.
No need to look at some other generated documentation.

Instant feedback
.  to try things out, we can do better
than lispers' REPL - we can do it
in place and instantaneously.
For example we can type in (+ 3 4)
and immediately we are shown the result
- no ctrl-enter or anything else.

Light Table takes this idea as far as it can
and doesn't just show you variables to the side,
but actually shows you how the code is filled in.
This lets you see how values flow through
arbitrarily complex groups of functions.

This level of real-time evaluation and visualization
basically creates a real-time debugger,
allowing you to quickly try various inputs
and watch it flow through your code.
There's no faster way to catch bugs
than to watch your program work.
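. a minimal sketch of such a "watch the values flow" tracer, built
on python's sys.settrace (the greetings function and the rendering
are hypothetical; a real-time debugger would draw these snapshots
next to each line of code):

```python
# hypothetical sketch: record a snapshot of the locals at every
# executed line, so each variable mention can show its value
import sys

snapshots = []   # each entry: (line number, copy of the locals)

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "greetings":
        snapshots.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer   # keep tracing nested lines and calls

def greetings(names):
    out = []
    for x in names:
        out.append("hello, " + x)
    return out

sys.settrace(tracer)
result = greetings(["chris"])
sys.settrace(None)
# snapshots now shows x and out filling in, line by line
```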

We built drafting tables for a reason
. desktop windows aren't a good abstraction
for what we do as software engineers .
Other engineers have large tables
where they can scatter around
drawings, tools, and other information .
A drafting table is a better abstraction:
We shouldn't need to limit ourselves
to a world where the smallest moveable unit
is a file - our code has much more
complex interactions that we can better see
when we can organize things conceptually.
. We saw an example of this with Code Bubbles,
but why can't we embed a running game ?
Then we can ask questions of it,
and have our environment answer them for us.
[. an image in the article at this point
shows a desktop where a folder full of files
is represented by a box
whose title is named after the folder
with file names inside the box .
. the right side has boxes of code;
so, for each function you inspect,
it also shows you called function`bodies .
. this reminds me of composite editing,
where the editor holds the entire desktop;
so, it serves as desktop layouts that can be
suspended and resumed, as with vmware .
. desktops and their windows
can be edited the same as
when an editor embeds images in text .
. the first compound docs I saw
were ms`office embedded objects:
you could mix & match Word's text columns,
Excel's spreadsheet windows, and various images .
. browsers have this feature (the embed tag);
but browsers are short on editing abilities,
and editors need to be integrated with the OS;
such that I can open a file,
and it unleashes a desktop of windows
all arranged in the way they were before .
. also, the layout editor needs to
control the border style of its windows;
eg, embedded graphics in text
should often have no border at all .
. a 1st class composite document
has everything we like about vmware,
but instead of windowing OS's
our compositing app is windowing apps .]
Code with a little illumination
There's no reason our tools can't
help us understand how things are organized .
In light mode, Light Table lets you
see what functions are called by
the one you're currently working on,
not just by highlighting ones in your code,
but by also showing you their code to the side.
We shouldn't have to constantly
navigate back and forth to see this .

Finally, all of this culminates
in the ability to see not just how
things I type into a scratch editor evaluate,
but how values flow through our entire codebase.
Here I find a bug where I wasn't passing x correctly.
I type (greetings ["chris"]) and immediately see
all the values filled in
not just for the current function
but all the functions that it uses as well.

What languages will it support?
. The first two are Javascript and Clojure,
.... we hit $300k! Python will be the
third language supported out of the gate.
. additionally,
new languages can happen through plugins.

Will it be open source?
. a firm believer in open source software
and open source technologies.
we can guarantee you that Light Table will be
built on top of the technologies that are
freely available to us today.
As such, we believe it only fair that
the core of Light Table
be open sourced once it is launched.
At some level, this is an experiment in
how open source and business can mix
- it will be educational for us all.

What's a license then?
. In order to download packaged distributions,
you'll need a license. Preliminarily,
we're thinking, for individuals,
licenses will be based on a model of:
"pay as much as you can of
what you believe it is worth".
. This gives everyone access to the tools
to help shape our future,
but also helps us stick around to
continue making the platform awesome.
We think what we build will be worth at least $50,
and so that's what we've used for our rewards.

Is it a standalone app?
. there's an instance of webkit as the UI layer
-- completely an implementation detail.
It will run locally on virtually any platform
and out of the gate will support
the big three (linux/mac/windows).

Can I script/extend it?
. It will be scriptable in Javascript
(and many other languages can be
translated into Javascript).
Ultimately the goal of the platform
is to be a highly extensible work surface
- even the initial core languages
will be written as plugins.
This allows us to build development interfaces
we haven't even imagined yet.

What about key bindings?
. by using the awesome CodeMirror editor,
this is something that is easily adapted.
If you're looking for a way to contribute,
help improve CodeMirror
and its emacs/vim plugins!
[. CodeMirror is a JavaScript component
that provides a code editor in the browser.
When a mode is available for your language
[c, python, go, js, ...]
it will color your code,
and optionally help with indentation.
. rich programming API and a CSS theming
for customizing CodeMirror
and extending it with new functionality.
. it's used by Mergely which is a
powerful online diff and merge editor
(Browser-based differencing tool)
that highlights changes in text.
It can be embedded within your own Web application
to compare files, text, C, C++, Java,
HTML, XML, CSS, and javascript.]

How can I help in the meantime?
. The better CodeMirror is,
the better all internet editors can be!
Past that, help us spread the word.
The more money we get
the more people I can involve in the project,
the more languages we can support,
and the more powerful the entire platform.
There's tremendous potential
-- we haven't even scratched the surface yet!

About the Developer:
. helped design the future of Visual Studio,
and released numerous open source
libraries and frameworks.
. for Microsoft was the Program Manager for
the C# and VB IDE
-- countless hours behind a one-way mirror
learning how people develop things.
Since then steeped in the world of
startups and OSS.
. worked with the guys at ReadyForZero
to build readyforzero.com,
created the Noir web framework,
built the SQL abstraction layer Korma,
and released a host of ClojureScript libraries
to make client side development a breeze
- many of which are now featured in
the canonical books for Clojure.
Even more recently,
built Bret Victor's live game editor
after watching his inspiring
"Inventing on Principle" presentation .
Light Table at news.ycombinator.com:
One thing Light Table could pick up / learn
is the ability to scale as function set grows,
to gain a kind of fractal navigability.
--[. I think he's referring to the call tree idea
aside from the view's detail modes
(nodes-view vs thumbnails vs fullview)]
stcredzero/"I told you so!"
Smalltalkers have been doing [LightTable]
--[the right thing]-- since the 80's
(If only we could have communicated
about this as well as Mr. Granger).
The first and the last points [under Also:]
were satisfied by lightning-fast searches of
"senders" and "implementers" .
1980's smalltalk:
- Smallest unit of code is the function.
- Able to get instant feedback on code changes.
- Multiple editors with just one function in it.
Show code in an "area of concern" not just in a file.
- The coding environment can show also
results, app windows, graphics, other tools.
- Can save the configuration of the above.
- You should never have to look for documentation
- Files are not the best representation of code,
 just a convenient serialization.
- Editors can be anywhere
and show you anything - not just text.
- changes produce instantaneous results
- We can shine some light on related bits of code .
. Dan Ingalls, the father of Smalltalk,
has picked up the baton again,
this time using Javascript.
Check out the MIT-licensed Lively Kernel
-- a new approach to Web programming
. the live demo at JSConf was jaw-dropping;
completely in line with the Smalltalk legacy .
It provides a complete platform for Web applications,
including dynamic graphics, network access,
and development tools.
Field (programming environment)
. embraces most (if not all)
of Light Table's principles.
As always, the multi-media programming environments
are miles ahead and nobody knows about them.
Field[by The OpenEnded Group] is amazing.
{Max/MSP, Pd, ...} are a different paradigm altogether,
but have had live editing, documentation a click away, etc
and have been in heavy use for 20+ years.

list of multi-media programming environments?
. the big names are Max/MSP, Pure Data,
vvvv, QuartzComposer, SuperCollider,
ChucK, Processing, openFrameworks,
Cinder, and Field[The OpenEnded Group].
But there are many more smaller projects
such as Lubyk, Overtone, LuaAV,
Faust, Plask, Impromptu and Fluxus.
I also want to plug NoFlo, which is a
'flow-based programming' library for node.js
that integrates with a visual editor.
the design of design book

gfodor, vdm, nickik:
recommend some reading materials?
Victor's writings
The Design of Design (see part 4)
simplicity’s virtues over easiness’
Alan Kay`Programming and Scaling [vid]
Viewpoints Research Institute
Mindstorms: Children, Computers, And Powerful Ideas
PLATO Learning System [vid]
udacity -- cheap edu
coursera -- cheap edu
khanacademy -- free videos