2009-12-31

China's coming up in a big way

12.11: news.adds/robotics/china's coming up in a big way:
KurzweilAI.net, Dec. 9, 2009
. just out, includes The Ray Kurzweil Interview,
and The Chinese Singularity
("Chinese culture has little of the West's
subliminal resistance to thinking machines or immortal people
and this cultural difference may manifest itself in the next decades
in subtle ways.")

Pisa's Valdera Polo Sant'Anna School` bionic arm

12.5: news.adds/cyborganics/Pisa's Valdera Polo Sant'Anna School` bionic arm:

Man controls cybernetic hand with thoughts
A brain-controlled bionic hand attached to an amputee's nervous system
via electrodes implanted into the remaining part of his left arm
has been developed by scientists at Pisa's Valdera Polo Sant'Anna School.
The patient was able to experience sensations when grasping and making a fist.
European scientists have successfully built a brain-controlled bionic hand
allowing amputees to feel hand sensations
and manipulate their limb--via the brain--as if it were still there.

Pierpaolo Petruzziello--who lost his arm under the elbow in a car crash several years ago
--has done just that, Italy's University Campus Bio-Medico of Rome announced Wednesday.

The bionic hand was developed at
Pisa's Valdera Polo Sant'Anna School
and surgically attached to Petruzziello's nervous system
via electrodes implanted into the remaining part of his left arm,
meaning the robotic body part was actually like an extension of his body.
After the surgery at the University Campus Bio-Medico of Rome in November 2008,
it took Petruzziello just days to start using the device.
During the LifeHand trial, which lasted a month,
Petruzziello, 26, was able to experience sensations when
grasping, making a fist, ...
The responses from the hand to commands sent from the brain were 95 percent correct,
Paolo Maria Rossini, head of neurology for the project, said Wednesday.
The next step, which is still at least a couple of years away,
is to work out a more long-term experiment that would hopefully lead to
cybernetic arms like the LifeHand as a viable option for amputees.
The EU has spent $3 million and five years on the project so far.

need more review to decompose the system

12.16: pos.addx/meta/need more review to decompose the system:
. what I've been calling addm because it's os-related,
should really be the domain of adda,
since the high-level language should include a portable os .
. as I was blogging addm articles starting back in 0906,
I noticed the naming had not been updated to
align with the realization that
nearly everything could be done without addm,
so that low-level details were really part of adda
-- only the mechanics of serving bytecode or wordcode
would be the domain of addm .

bytecode cache

12.1: addm/bytecode.cache:
. just as there are high-speed registers for an act'rec's hotspots,
there could be registers for the bytecode segment .
. if the code were in registers,
then the addresses could be absolute instead of
offsets from the reg#pc (program counter) .
. share by swapping data segs .
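
. a minimal C sketch of that addressing difference, with hypothetical opcodes:
when the code segment sits in a fixed register-backed area,
a jump operand can be an absolute index;
otherwise it must be an offset added to the reg#pc:

#include <stddef.h>
#include <stdint.h>

enum { OP_JMP_REL, OP_JMP_ABS, OP_HALT };

void run(const uint8_t *code) {
    size_t pc = 0;
    for (;;) {
        switch (code[pc]) {
        case OP_JMP_REL:            /* target = pc + signed offset */
            pc += (int8_t)code[pc + 1];
            break;
        case OP_JMP_ABS:            /* target = absolute index; no add needed */
            pc = code[pc + 1];
            break;
        case OP_HALT:
            return;
        }
    }
}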

real sensitivity training

11.10: psy/real sensitivity training:
. I've often been perplexed at attempts by policy
to replace terms like "(the retarded)
with "(the learning challenged);
that change might carry fewer sour memories
than anything like "(retard)
but, it is still pointing at the same class
using a different name;
ie, after a while, they will just have to
change that name too
as soon as "(the learning challenged) learn to
associate that name
with the way staff treats them or points at them .
. one real difference that policy could make
is to refer to people by their effect on others:
eg, the real issue with referring to "(the retarded)
is that the enclosing situation is sensitive to
those who are irrational, violent, protection-needy,
unsociable, or uncooperative;
and, accident-prone, or future-blind, etc .
. these are the important differences
that characterize the class we are pointing at,
yet they are all traits that
the clients could equally say of staff;
and, these terms of affect
place a healthy emphasis on
where you are
rather than how you got there .


11.15: pol/mongoloids, and nobody even cared:
. recalled being informed that the term "(hispanic)
is like "(mongoloid):
you don't refer to people by where they're from
because that's a baseless discriminant
since you can't draw any relevant conclusions
from just knowing where someone's ancestors were from .
. yes, and now it seems funny using mongoloid as a
euphemism for Down's syndrome
since the notable feature of that disorder
is not appearing to be from Mongolia
but being prone to irrational or even violent acts .
. actually the historical Mongols were indeed known for
unparalleled ferocious acts of violence .
. now I see the satire of that 1980's song
with the urgent rock beat, and these lyrics:
"(
. mongoloid, he was mongoloid, and nobody even cared!
)
. the whole world is based on so much irrationality
but is backing it up with ferocious acts of violence,
and nobody gives it a second thought !
[ . to give myself as example,
I'm a livid advocate of pro-choice and death-with-dignity
but my real problem is not having more control over
reproductive machinery,
assignment of careprovider responsibilities,
and drugs that make pain worth living .
] .

2009-12-30

addm's save registers

11.29: sci.addm/sharing registers sub's efficiently:

. when an act'rec saves registers
couldn't this be done more space-efficiently by
having each subprogram back up its own?
ie,
main has act'rec hold all var's
but while in control,
it copies its hot var's to reg's .
. when it calls a sub,
it first updates the stack with the current reg'val's
then calls a sub' that can do the same thing .
. in case of concurrency,
the task frame needs a space to hold reg's
in a place where the scheduler can find them .
. this could be an unwise idea
depending on what the efficiencies are
and the ways for registers to be moved to stack .
. as a routine proceeds, its use of reg's varies;
so there's less code used in the idea of having
a single register_save.rec for each act'rec .
. one code can save a vector-given set of regs
to curr's reg'save.rec [as is typical of asm instructions
where each register is represented by a bit
telling the reg'save instruction which reg's to save] .
. whatever's right,
depends also on how much info is sharable .
. the caller of a module cannot know
what the module's reg needs are,
so it must assume the worst case:
all used regs must be saved .
. in a local subsystem,
the sub's function type can indicate which reg's it's using;
this is a good idea in a modular system .
. perhaps there can be codes for
2 styles of calls in {modules, subordinates} .
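
. a minimal C sketch of the bitmask idea above (names are hypothetical):
each register is one bit, and the same save/restore code serves both
the worst-case mask of a module call
and the precise mask a local sub declares in its function type:

#include <stdint.h>

#define NUM_REGS 16

typedef struct {
    uint32_t regs[NUM_REGS];  /* live register values         */
    uint32_t save[NUM_REGS];  /* act'rec's register-save area */
} frame_t;

/* save only the registers named in the mask:
   bit i set means reg i must be preserved across the call */
static void save_regs(frame_t *f, uint16_t mask) {
    for (int i = 0; i < NUM_REGS; i++)
        if (mask & (1u << i))
            f->save[i] = f->regs[i];
}

static void restore_regs(frame_t *f, uint16_t mask) {
    for (int i = 0; i < NUM_REGS; i++)
        if (mask & (1u << i))
            f->regs[i] = f->save[i];
}

. a module call would pass the worst-case mask (all used regs),
while a local sub's declared mask can skip the registers it never touches .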

terminology

11.30: web.adds/engl/a word for study of sensible terminology:

. instead of the word "(engl),
I was hoping for a replacement that meant
study and design of sensible terminology .

Etymology
1. the derivation of a word.
2. an account of the history of a particular word or element of a word.
3. the study of historical linguistic change,
esp. as manifested in individual words.
Origin:

1398, from Gk. etymologia, from etymon "true sense"
(neut. of etymos "true," related to eteos "true")
+ logos "word." In classical times, of meanings; later, of histories.
Latinized by Cicero as veriloquium.
1350–1400; ME: L etymologia: Gk etymología,
equiv. to etymológ(os) studying the true meanings and values of words
(étymo(s) true (see etymon) + lógos word, reason) + -ia -y³


lexis:
1955–60: Gk léxis speech, diction, word, text,
equiv. to lég(ein) to speak, recount
(akin to lógos account, word, L legere to read;
see logos, lection ) + -sis -sis
the vocabulary of a language, as distinct from its grammar;
the total stock of words and idiomatic combinations of them in a language;
lexicon.

lexigram:
a symbol that represents a word
lexicology:
The branch of linguistics that deals with the lexical component of language.
study of the formation, meaning, and use of words
and of idiomatic combinations of words.

lexicography:
1. the writing, editing, or compiling of dictionaries.
2. the principles and procedures involved in
writing, editing, or compiling dictionaries.

lexicon (plural: lexica):
1. a wordbook or dictionary, esp. of Greek, Latin, or Hebrew.
2. the vocabulary of a particular language, field, social class, person, etc.
3. inventory or record
4. Linguistics.
a. the total inventory of morphemes in a given language.
b. the inventory of base morphemes plus their combinations with derivational morphemes.

lexical:
1. of or pertaining to the words or vocabulary of a language,
esp. as distinguished from its grammatical and syntactical aspects.
2. of, pertaining to, or of the nature of a lexicon.

lexeme:
n. The fundamental unit of the lexicon of a language.
Find, finds, found, and finding are forms of the English lexeme find.

center for economic literacy

11.24: web.adds/center for economic literacy:
. Powell Center for Economic Literacy is a resource for teachers and students
with lesson plans, and curricula for economics,
and information on learning opportunities in the field of economics.
for aspect-inversion: integrating economics lessons into other fields .

hydrogen economy is here!

Making the entire cell using a roll-to-roll process
gives the company an advantage over other
thin-film photovoltaic companies that print on glass,
which is heavier and limited to smaller areas,
says Solexant CEO Damoder Reddy.
"The cost benefit is dramatic,
allowing us to produce cells for 50 cents a watt," he says.
First Solar, a thin-film company that uses vacuum deposition
to print its cells onto glass,
has manufacturing costs of 85 cents per watt.
Nanosolar, another company making nanocrystal solar cells,
uses a different semiconductor that requires chemical reactions
to take place during printing,
which increases the complexity and expense of the process.
"We print a preformed semiconductor,"
which eliminates such steps, says Reddy.
The company's first product, which Reddy says
will sell for $1 per watt next year,
will contain a single layer of the nanocrystals.
The company is currently developing other types of nanocrystals
that are more responsive to different bands of the solar spectrum
in the hopes of boosting its cells' efficiency.
"Ultimately we want to make a multilayer, broad-spectrum cell,"
says Reddy.
. For a truly disruptive project, check out the open-source framework

11.23: web.phy/solar/water to hydrogen was prev'ly requiring platinum:

. what was the big deal about getting {oxygen, hydrogen} out of electricity;
I saw that in grade school
-- what I didn't know was that efficient splitting
was only practical due to the presence of a very rare
and expensive catalyst: platinum .
. now the hydrogen just got much cheaper !
. heard about the latest update on scifri radio:

Daniel Nocera
The Henry Dreyfus Professor of Energy and Professor of Chemistry
Massachusetts Institute of Technology

. a report by an international expert on solar energy
. Daniel Nocera describes a long-awaited, inexpensive method for solar energy storage
that could help power homes and plug-in cars in the future

. Scientists have long known how to split water into hydrogen and oxygen,
but unlike plants, they've needed high temperatures and harsh solutions,
or rare and expensive catalysts like platinum.
[without the platinum, the loose hydrogens just stay unbound,
increasing the acidity, and complicating containment]
Nocera's catalyst is the first made from
cheap, abundant materials (cobalt and phosphate)
that works in benign conditions: a glass of water at room temperature.
The crucial insight that makes this possible came to Nocera in 2004,
when biologists figured out plants' water-splitting machine.
They learned that the machine falls apart regularly,
requiring the leaf to rebuild it from scratch.
They too could let the catalyst break down,
then use a small amount of solar energy to reconstruct it, again and again.

A paper describing the work appeared in the July 31 issue of Science.
"This is just the beginning," said Nocera,
principal investigator for the Solar Revolution Project
funded by the Chesonis Family Foundation
and co-Director of the Eni-MIT Solar Frontiers Center.
"The scientific community is really going to run with this."

Nocera and Matthew Kanan, a postdoctoral fellow in Nocera's lab,
developed the new catalyst.
When electricity runs through the electrode,
the cobalt and phosphate form a thin film on the electrode,
and oxygen gas is produced.
Combined with another catalyst, such as platinum,
that can produce hydrogen gas from water,
the system can duplicate the water splitting reaction
that occurs during photosynthesis.
The new catalyst works at room temperature, in neutral pH water,
and it's easy to set up, Nocera said.

Nocera hopes that within 10 years,
homeowners will be able to power their homes in daylight through photovoltaic cells,
while using excess solar energy to produce hydrogen and oxygen
to power their own household fuel cell.
Electricity-by-wire from a central source could be a thing of the past.

This project was funded by the National Science Foundation
and by the Chesonis Family Foundation .

. MIT announced a $10 million grant
to develop technology to make solar power mainstream.
The Chesonis Foundation donated the money for research in three areas:
materials to improve conversion of light to electricity;
storage;
and hydrogen production from solar energy and water.
Called the Solar Revolution Project,
it will provide funding for 30 five-year fellowships in solar energy.
The idea is to pursue "blue sky" research,
in an effort to fill the void between corporate-funded applied research
and the limited amount of federal money dedicated to basic science research in solar,
said Ernest Moniz, the director of the MIT Energy Initiative.
"There are some really hard problems that need to be solved
for the really big breakthroughs to come," Moniz said.
"The underlying science of photosynthesis
is extremely complicated and not well understood at the electronic level."
As part of the campus-wide MIT Energy Initiative,
the university already has other ongoing solar-related research initiatives,
including the recently announced
MIT-Fraunhofer Center for Sustainable Energy Systems.
The Solar Revolution Project funding is meant to be flexible to allow researchers to
pursue breakthrough technologies.
A solar leadership council will be formed to coordinate activities among
different research efforts at MIT, Moniz said.
The Chesonis grant will also help fund a
solar energy research report,
modeled on the university's influential reports on nuclear and coal.
Although the power is free, solar electric panels are relatively expensive
because of the large up-front cost.
Solar power is a small fraction of the overall electricity production in the U.S.
--just half of one percent in 2007,
according to the U.S. Energy Information Administration.
. to bring costs down, researchers and solar companies are trying to develop
large-scale manufacturing technologies and higher solar-cell efficiency .
The Chesonis Family Foundation was founded by Arunas Chesonis,
an MIT graduate who is CEO of telecom company Paetec Holding.

Even though it is only the size of a postage stamp
-- compared to the usual solar collector area that spans 4 x 4 feet --
the cell is much more efficient in collecting and reusing solar energy.
The lens focuses incoming sunlight onto the solar cell.
Microchannels at the base of the module
transfer energy in the form of heat and light to wires contained inside.
Each vertical stack of lenses rolls and tilts like a track blind,
keeping the surface of the lenses facing incoming sunlight
as the sun changes position in the sky throughout the day.
Incorporating these new cells into arrays
could make solar energy an option that is competitive with other energy sources .
12.18: news.gear/could Double Solar efficiency:
Technology Review Dec. 18, 2009`Hot Electrons Could Double Solar Power
. Boston College researchers have built solar cells
that get a power boost from the blue light portion of sunlight
(whose energy is normally lost in heat).
The solar cells could, in theory,
result in efficiencies as high as 67 percent of the energy in sunlight,
compared to 35 percent with ordinary solar cells...


cyborganics

11.21: news.adds/cyborganics/Chips in brains will control computers by 2020:
By the year 2020, you won't need a keyboard and mouse to control your computer,
say Intel Corp. researchers,
who are close to gaining the ability to build brain sensing technology
into a headset that could be used to manipulate a computer,
working with associates at Carnegie Mellon University and the University of Pittsburgh.
Their next step is development of a tiny, far less cumbersome sensor
that could be implanted inside the brain.

refuting the big bang

11.21: news.phy/Mystery 'dark flow' extends towards edge of universe
Up to 1000 galaxy clusters have been found to be
streaming at up to 1000 kilometers per second
towards one particular part of the cosmos,
a possible sign that other universes are out there....
. that reminds me of the book refuting the big bang:
he pointed out that structures are part of superstructures,
so then, given this and seeing things attracting each other,
it appears they are part of a structure like a galaxy,
only with galaxy clusters as its units instead of solar systems .

cloud computing

11.21: news.addx/security/cloud computing:
. If a full cryptographic solution is far-off,
what would a near-term solution look like?
WD: A practical solution will have several properties.
It will require an overall improvement in computer security.
Much of this would result from
care on the part of cloud computing providers
--choosing more secure operating systems such as OpenBSD and Solaris--
and keeping those systems carefully configured.
A security-conscious computing services provider
would provision each user with
its own processors, caches, and memory at any given moment
and would clean house between users,
reloading the operating system and zeroing all memory.

A serious potential danger will be any laws intended to
guarantee the ability of law enforcement
to monitor computations that they suspect of supporting criminal activity.
Back doors of this sort complicate security arrangements
with two devastating consequences.
Complexity is the enemy of security.
Once Trojan horses are constructed,
one can never be sure by whom they will be used.
1439:
. afraid of gmail? the dod's internet eavesdropping machine
will pick it up anyway;
even encrypted messages will be cracked in a couple of years,
and when they are,
they give some insight into who we are,
which in turn may signal
who needs closer surveillance via other means .

literateprograms.org

11.5: news.addx/literateprograms.org:
. if you like Eiffel, add some code to en.literateprograms.org?
in fact, there is not much Eiffel code:
but there is plenty of C code !!!
and Java:
. the clean way to find what is getting attention .
. interesting to find out that bignum math is best done by FFT .

booking robotics

11.30: todo.adde/lib/pdf's on knowledge representation:

web.adde/wordnet:

WordNet is an online lexical reference system.
Word forms in WordNet are represented in their familiar orthography;
word meanings are represented by synonym sets (synsets)
- lists of synonymous word forms that are interchangeable in some context.
Two kinds of relations are recognized: lexical and semantic.
Lexical relations hold between word forms;
semantic relations hold between word meanings.
To learn more about WordNet, see the book
containing an updated version of "Five Papers on WordNet"
and additional papers by WordNet users .
Several "standoff" files provide further semantic information:

* The Morphosemantic Database
(Semantic relations between morphologically related nouns and verbs)
* The Teleological Database
(an encoding of typical activity for which artifact was intended)
* "Core" WordNet
A semi-automatically compiled list of 5000 "core" word senses in WordNet
(approximately the 5000 most frequently used word senses,
followed by some manual filtering and adjustment).
* Logical Forms for Glosses (Core WordNet Nouns)
Logical forms for the glosses of the ~2800 noun senses in core WordNet, in plain text format, using eventuality notation.
* Logical Forms for Glosses (All WordNet)
Logical forms for most of the glosses in WordNet 3.0
(except where generation failed), in XML format, using eventuality notation.

Texai is a chatbot that intelligently seeks to
acquire knowledge and friendly behaviors.
Important components include the RDF Entity Manager, the Texai Lexicon,
and Incremental Fluid Construction Grammar.
The blog

Cyc is an artificial intelligence project (unix)
that attempts to assemble a comprehensive ontology
and knowledge base of everyday common sense knowledge,
with the goal of enabling AI applications to perform human-like reasoning.
Now that wikipedia and opencyc are linked,[11]
a version of Wikipedia is being developed that enables
browsing the encyclopedia by cyc concepts.[12]

. the Cyc ontology's domain is all of human consensus reality:
* Links between Cyc concepts and WordNet synsets.
* NEW! 100,000+ "broaderTerm" assertions, in addition to the previous generalization (subclass) and instance (member) assertions, to capture additional relations among concepts.
* NEW! Links between Cyc concepts (including predicates) and the FOAF ontology.
* NEW! Links between Cyc concepts and Wikipedia articles.
* The entire Cyc ontology containing hundreds of thousands of terms,
along with millions of assertions relating the terms to each other,
forming an ontology whose domain is all of human consensus reality.
* English strings (a canonical one and alternatives)
corresponding to each concept term, to assist with search and display.
* The Cyc Inference Engine and the Cyc Knowledge Base Browser
are now Java-based for improved performance and increased platform portability.
* Documentation and self-paced learning materials
to help users achieve a basic- to intermediate-level understanding
of the issues of knowledge representation
and application development using Cyc.
* A specification of CycL,
the language in which Cyc (and hence OpenCyc) is written.
* A specification of the Cyc API
for application development.

I've been following the Texai discussion,
but I don't see how it overcomes the shortcomings of Cyc,
such as its requiring a very expensive process of
encoding knowledge explicitly.
This may have seemed like a sensible approach in the 1980's
when we lacked the computing power and training data to
implement statistical approaches.
But in hindsight the programming effort was grossly underestimated,
and we still don't know.
IMHO Cyc failed because it is based on models of artificial language
unrelated to the way children learn natural language.
In artificial languages,
you have to parse a sentence before you can understand it.
In natural language,
you have to understand a sentence before you can parse it.
I entirely agree with [that] comment above.
The notion of bootstrapping in the Texai English dialog system
is to learn the meanings of the most frequently occurring words
in the definitions of its yet-to-be-learned vocabulary,
and then by reading their definitions,
learn the meanings of the remaining words
with help from a multitude of volunteer mentors.
In particular Matt said:
you have to understand a sentence before you can parse it.
An analysis of the word usage frequency in the Texai vocabulary definitions
reveals that knowing perhaps only 10000 frequently occurring words
should be enough to understand half
of the whole lexicon of 85000 English words.
I acknowledge that there must be a very expensive process of
encoding knowledge explicitly.
Like Cycorp's initial approach for DARPA's
Rapid Knowledge Formation project,
for which I was the first project manager,
Texai will use English dialog to rapidly acquire knowledge.
I hypothesize that such dialog greatly reduces the expense
of teaching new facts to the system,
and also permits a vast multitude of volunteer mentors to divide the effort:
many hands make light work.

. OWL: a family of knowledge representation languages for authoring ontologies .
OWL Characteristics
OWL provides the capability of creating classes and properties,
defining instances, and operating on them.
* Classes:
User-defined classes which are subclasses of root class owl:Thing. A class may contain individuals, which are instances of the class, and other subclasses. For example, Employee could be the subclass of class owl:Thing while Dealer, Manager, and Labourer all subclass of Employee.
* Properties:
A property is a binary relation that specifies class characteristics. They are attributes of instances and sometimes act as data values or link to other instances.
There are two types of simple properties:
datatype and object properties.
Datatype properties are relations between instances of classes
and RDF literals or XML schema datatypes.
* Instances:
Instances are individuals that belong to the classes defined. A class may have any number of instances. Instances are used to define the relationship among different classes.
* Operations:
OWL supports various operations on classes such as union, intersection and complement. It also allows class enumeration, cardinality, and disjointness.
TDWG has an ontology for taxonomy [21].


quicksilver

11.25: news.adde/quicksilver:
. far and away the best osx app I have.
be sure to enable advanced features,
use proxy items, create custom triggers,
and read blog posts about things people are doing with it,
and watch youtube vids of other customizations...
. my response:
. from the vid you can see the basic deal is being quick.keys:
you type in very little by kybd
and it turns 2 icons into a cmd*obj pair .
. it's very smart about how you can choose what app is called
and how it presents various types of data .

blind assist needs both tui and aui

11.23: adde/blind assist needs both tui and aui:
. many softwares don't have an audio user interface (aui)
so the blind typically employ their own screen scraper
to convert gui to aui .
. there is some advantage to adde having a dedicated aui,
but in case people want to use their own screen scraper,
there should also be a mode that presents a
textual user interface (tui) .
. this means translating even graphical concepts
into the sort of text that makes sense being verbalized .
. one of the first problems is
how to intuitively read a table:
. in a table you'd give dimensions;
then choices about how the table is read:
eg row i loop contents;
eg, row i loop col j contents;
eg, row-col i,j: cell contents .
. they can be made aware of customizable orderings:
can hand-switch col' order before proceeding;
can order a row as ascending or descending; etc .
. an intelligence plug-in should provide
trends in table data .
. all native d'types have a visual representation
that is text
and can also have a separate audio representation
for when the text is an abbreviation
or some other non-word
that a screen scraper couldn't figure out how to say .
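
. a minimal C sketch of the table-reading orderings above;
speak() is a stand-in for whatever audio back-end the aui uses (an assumption):

#include <stdio.h>

/* speak() stands in for the audio back-end (hypothetical) */
static void speak(const char *s) { printf("%s\n", s); }

/* read a table aloud: give dimensions first,
   then "row i, col j: contents" in row-major order */
static void read_table(const char *cells[], int rows, int cols) {
    char buf[128];
    snprintf(buf, sizeof buf, "table: %d rows by %d columns", rows, cols);
    speak(buf);
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++) {
            snprintf(buf, sizeof buf, "row %d, col %d: %s",
                     i + 1, j + 1, cells[i * cols + j]);
            speak(buf);
        }
}

. the customizable orderings would just be other loops over the same cells:
a column-major variant, or a sorted view of one row .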

vnc integration

11.23: adde/vnc integration:
. the mvc and journaling arch' can make it easy to
do a vnc system;
so, try to be more inclusive of that feature .
that should be considered a design goal .

unicode saving space(s!)

11.16: adde/unicode/saving space(s!):
. in addition to the idea where
some codes are mapped to words,
a range of chars can be for word parts:
. a case stmt can check the code's range to chg mode:
if the code is not in math
or other english symbols:
if the code is less than the ascii limit,
it is word mode;
else if the code is less than the word-mode limit,
it is word`parts mode .
. the key idea of word parts mode
is the classification into {prefix root midfix suffix} .
. if there's a distinction between midfix and suffix
then can have word`part strings that are
separated not by spaces
but by part-classes -- saving a lot of space(s!) .
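
. one plausible reading of those case ranges as a C sketch,
with hypothetical boundaries: plain chars at the bottom of the code space,
then codes mapped to whole words, then word-part codes that
carry their own part-class (so adjacent parts need no space between them):

#include <stdint.h>

enum token_mode { MODE_CHAR, MODE_WORD, MODE_WORD_PART };
enum part_class { PREFIX, ROOT, MIDFIX, SUFFIX };

#define ASCII_LIMIT 0x0080u  /* below: plain chars, math, other symbols */
#define WORD_LIMIT  0xE000u  /* below: codes mapped to whole words      */

enum token_mode classify(uint32_t code) {
    if (code < ASCII_LIMIT) return MODE_CHAR;
    if (code < WORD_LIMIT)  return MODE_WORD;
    return MODE_WORD_PART;           /* word`parts mode */
}

/* assumption: a word-part code packs its class in the low 2 bits,
   so {prefix, root, midfix, suffix} separate the parts of a word
   without any space characters */
enum part_class part_of(uint32_t code) {
    return (enum part_class)(code & 3u);
}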


gui

9.4: adde/ide/assisting while preserving:
. when building the etree
how to differentiate the user's etree
from the additions adde makes doing
autocompletion or correction?
. each subtree needing change
is replaced by a menu list:
{ the original subexpr
, the other choices that adde guessed,
"(other) --- means the user can clarify
}
. when the issue is resolved,
the menu is replaced with the chosen etree .
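
. a minimal C sketch of that menu-in-the-etree idea;
the node and field names are hypothetical:

/* an etree node: token plus subtrees */
typedef struct enode enode;
struct enode {
    const char *text;
    enode **kids;
    int n_kids;
};

/* a subtree needing change is replaced by a menu node
   holding the user's original plus adde's guesses */
typedef struct {
    enode  *original;   /* the user's subexpr, always preserved */
    enode **guesses;    /* adde's autocompletion / correction candidates */
    int     n_guesses;  /* last entry stands for "(other): user clarifies */
    int     chosen;     /* -1 until the user resolves the menu */
} menu_node;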

11.1: adde/logging:
. when reporting a msg,
it auto'ly includes a path which hyperlinks to the source;
it also links to the process,
so clicking a log entry of an active process
will bring that process's window to the front
( the source path is nothing but a link,
and then adde adds the link's name for human use) .

11.7: adde/gui/shimmering selection points:
. if window frames had larger selection points
they might get in the way of data, but if shimmering ...

11.9: adde/gui/impl'details:

. the view is a tree of rect's,
since a rect may contain rects
just as a tree contains subtrees .

. all atomic types have a rectangular image .

. convert mouse coords to window coords .
. after mouse coords to window coords,
convert again(conceptually) to obj coord;
ie, the viewing window rect will often be
much smaller than the obj rect
so the obj can be scrolled
which chg's the translation from view.point to obj point .
. this way you don't have to
chg all obj rect ranges just because of a scroll .

. each rect subtree has location ranges
and a list of subrects .

. word wrapping depends on
both the line size (column width)
and word size (character height
or preferred size of other atomic types):
. small words (having few char's)
have their own word code (they are word.atoms)
while larger words are
sequences of word.atoms (word.ules)
eg, anti-dis-establish-ment-arian-ism .

. height of atom determines height of line
but title lines are larger (or rarely smaller) .

. say most lines are 10 pixels high;
then obj coord / 10 gives the approx line #;
so then one can ask for the coords of that line's rect;
if its coord is greater than the cursor coord,
try the prev'line, etc .
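
. a minimal C sketch of that hit-test, with hypothetical types:
scrolling only changes the window-to-obj translation,
and the line is estimated from the typical height
then verified against the line's actual rect:

typedef struct { int x, y; } point;
typedef struct { int x, y, w, h; } rect;

/* mouse -> window coords happens first (toolkit-specific);
   window -> obj coords is just the scroll translation */
point window_to_obj(point p, point scroll) {
    point q = { p.x + scroll.x, p.y + scroll.y };
    return q;
}

/* estimate the line from the typical line height,
   then step back if the estimate overshot the cursor */
int line_at(const rect *lines, int n_lines, int line_h, point obj) {
    int i = obj.y / line_h;               /* approx line # */
    if (i >= n_lines) i = n_lines - 1;
    while (i > 0 && lines[i].y > obj.y)   /* try prev'line */
        i--;
    return i;
}
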
. when a graphic is embedded in text
. the word wrap has to divide the current column
into 2 columns:
one for the pict,
and then the remaining space holds any text that was
between pict and the next newline .
ie, a pict placed at the eol [end of line]
means don't have any text beside the pict .
. you can see this as 3 text rect's
and one graphic rect .

. a simplifying idea (though not a surely portable one)
is that gui toolkits like the one on mac
can help you with the details if you
view text embedded with pict as
really being separate windows
that are made to stay together .

11.21: news.adde/what you're doing now, only with a 2nd life added:
. The web today does not optimise for human behaviour.
When we use it we are usually alone and it is not live.
In a Second Life store you can see other people,
sit with and talk to them.
I envisage we will move a lot of what we are doing on the internet today
into these more lifelike, 3D spaces.

11.7: news.addx/portability:
. mouse-still-up is appreciated but not always available .

11.16: addx/admin vs user modes:
. admin mode is there to offer a safe user mode
for untrusted or naive users .
[12.25: it would be a filter allowing the user to do
some admin activities while also preventing
the sort of damage that would only occur accidentally
or from vandals .
]

communicating event loops model

11.26: news.adda/concurrency/communicating event loops model:

. ta da!:
communicating event loops model
-- the new standard in distributed prog'ing

. a postmortem distributed language-independent debugger
whose graphical views follow message flow
across process and machine boundaries.
An increasing number of developers face the difficult task of
debugging distributed asynchronous programs.
This trend has outpaced the development of adequate debugging tools and currently,
the best option for many is an ad hoc patchwork
of sequential tools and printf debugging.
This paper presents Causeway,
a postmortem distributed debugger that demonstrates a novel approach
to understanding the behavior of a distributed program.
Our message-oriented approach borrows an effective strategy from
sequential debugging:
To find the source of unintended side-effects,
start with the chain of expressed intentions.
We show how Causeway's integrated views
- describing both distributed and sequential computation -
help users navigate causal pathways as they pursue suspicions.
We highlight Causeway's innovative features which include adaptive,
customizable event abstraction mechanisms and graphical views
that follow message flow across process and machine boundaries.
[paper is gone, see here]

2009-12-29

aop (aspect-oriented programming)

11.8: adda/aop as metaprogramming:
. aop can be impl'd at the sourcecode level
combining the ideas of scm (version control)
and literate programming's weaving .

user-defined literals

11.7: adda/type"literals/symbol trees:
. most literals can be described abstractly
as symbol expressions:
literals are just given in english-symbol trees
and system sends the etree to the type'mgt,
who then has private ways of interpreting it .
. one of the things a type like that gives the system
is a literal-checking routine,
so then instead of enumerating the literals
with simple or branched enum's,
the compiler uses the literal-checking routine
during compile-time (vs the usual
where user routines are not available during compile-time) .

11.14: adda/type/modular literals:
. the type'face provides an etree description of
acceptable literals (a tall tree in
{ints, floats, strings, symbols})
. the type'body then has to provide
2 system-defined callback functions
that explain
how they want the etree literal represented as binary
and how to express its binary value back to etree .
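
. a minimal C sketch of that modular-literal interface;
all names are hypothetical, and etree is left opaque:

typedef struct etree etree;   /* english-symbol tree, opaque here */

/* what a type hands the system: a compile-time literal check,
   plus callbacks between the etree literal and its binary value */
typedef struct {
    const char *name;
    int    (*literal_ok)(const etree *lit);             /* compile-time  */
    int    (*from_etree)(const etree *lit, void *bin);  /* etree -> bin  */
    etree *(*to_etree)(const void *bin);                /* bin -> etree  */
} type_face;

/* the compiler, instead of matching an enum of literals,
   calls the type's own checker during compile time */
int check_literal(const type_face *t, const etree *lit) {
    return t->literal_ok(lit);
}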

c-- (c minus minus.org)

5.1: news.addm/c--/haskell won't be using LLVM:

In this post I will elaborate on
why some people think
C-- has more promise than LLVM
as a substrate for lazy, functional languages.
Let me start by making one thing clear:
LLVM does have support for garbage collectors.
I am not disputing that.
However, as Henderson has shown,
so does C and every other language.
The question we have to ask is not
"Does this environment support garbage collection?"
but rather
"How efficiently does this environment
support garbage collection?".
To recap,
Henderson's technique involves placing
root pointers
(the set of pointers which can be
followed to find all live data)
on a shadow stack.
Since we manage this stack ourselves,
it shouldn't be a problem for the GC to walk it.
In short, because root pointers must be spilled to the
shadow stack around every allocation,
each heap allocation incurs
an unnecessary stack allocation
and heap pointers are
never stored in registers for long.
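
[. a minimal C sketch of Henderson's shadow-stack technique
as the post describes it; names are hypothetical:

#include <stddef.h>

/* each function keeps its root pointers in a frame
   it links onto a stack the GC can walk */
typedef struct shadow_frame {
    struct shadow_frame *prev;
    size_t n_roots;
    void *roots[4];            /* fixed size for the sketch */
} shadow_frame;

static shadow_frame *shadow_top;

/* the collector finds all live roots by walking the chain */
void gc_walk_roots(void (*mark)(void *p)) {
    for (shadow_frame *f = shadow_top; f; f = f->prev)
        for (size_t i = 0; i < f->n_roots; i++)
            if (f->roots[i]) mark(f->roots[i]);
}

void *example(void) {
    shadow_frame f = { shadow_top, 1, { NULL } };
    shadow_top = &f;           /* push: the per-call overhead */
    f.roots[0] = NULL;         /* would hold a fresh allocation */
    /* ... work, always reloading the pointer from f.roots[0],
       so it is never kept in a register for long ... */
    void *result = f.roots[0];
    shadow_top = f.prev;       /* pop */
    return result;
}
]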

Now what does this mean for
languages like Haskell?
Well, unlike programs written in
more traditional languages,
a Haskell application might very well
do between 10 and 20 million
heap allocations per second.
Writing Haskell programs is more about
producing the correct data stream
than it is about performing the right side-effects
. It's common for functions in Haskell
to manipulate data without executing
any side-effects. (Think spreadsheets.)
This way of computing obviously requires
a very cheap method of allocation.
Performing 10 million unnecessary
stack allocations per second
would severely hurt performance,
and not having heap pointers in registers
could easily be equally devastating.

So what about LLVM?
Shouldn't the built-in GC support in LLVM
be more efficient than any cheap hack?
Well, it turns out it isn't.
The conflict between garbage collection
and optimizations hasn't changed,
and neither has the solution:
disabling or bypassing optimizations.
This in turn means unnecessary stack allocations
and sub-optimal use of registers.

That LLVM'ers haven't solved the problem of
zero-overhead garbage collection
isn't too surprising
. Solving this while staying agnostic of the data model
is an open question in computer science.
It is here C-- differs from LLVM
. C-- is a research project that aims at solving
difficult problems such as supporting efficient GCs
and cheap concurrency.
LLVM, on the other hand, is an engineering project.

In conclusion:
garbage collection in LLVM incurs
unacceptable overhead,
and while C-- and LLVM do have some overlap,
the problems they're trying to solve are quite different.
Posted by David Himmelstrup at 11:52 AM
5.2: co.addm/stackoverflow.com/llvm vs c--:


I've been excited about llvm being
low enough to model any system
and saw it as promising
that Apple was adopting it;
but then again,
they don't specifically support Haskell,
and some think that Haskell
would be better off with c--,
adding that there's
nothing llvm can do to improve on that .

> That LLVM'ers haven't solved the problem of
zero-overhead garbage collection
> isn't too surprising .
> Solving this while staying agnostic of the
data model
> is an open question in computer science.
I am referring to

5.9: answer accepted:

Well, there is a project at UNSW
to translate GHC Core to LLVM.
Remember: it wasn't clear 10 years ago
that LLVM would build up all the
infrastructure C-- wasn't able to
. Unfortunately,
LLVM has the infrastructure for
portable, optimized code,
but not the infrastructure
for nice high level language support,
that C-- ha(s)d.
An interesting project
would be to target LLVM from C-- ..

comment to answer:
. great answer; that was
just the blindspot-undo I was looking for!
. llvm'ers had a similar response
to the lack of concurrency support:
it's an add-on library thing .
. c-- can be ported to llvm,
meaning that llvm's gc simply won't be used .


11.9: web.adda/c--/review:


C-- is a compiler-target language.
The idea is that a compiler for a high-level language
translates programs into C--,
leaving the C-- compiler to generate native code.
C--'s major goals are these:

C-- is not "(write-once, run-anywhere) .
It conceals most architecture-specific details,
such as the number of registers, but it exposes some.
In particular, C-- exposes the word size, byte order,
and alignment properties of the target architecture, for two reasons.
First, to hide these details would require
introducing a great deal of complexity, inefficiency, or both
-- especially when the front-end compiler
needs to control the representation of its high-level data types.
Second, these details are easy to handle in a front-end compiler.
Indeed, a compiler may benefit, because
it can do address arithmetic using integers
instead of symbolic constants such as FloatSize and IntSize.
web.adda/what do the c-- folks think of llvm?

summary:
. why isn't the llvm project working for c-- users?
llvm makes the assumption that there exists a generic assembler,
and c--, by assuming otherwise,
is not about portability:
the current version targets only the intel'86 architecture .

I do not understand the assertion that LLVM is uncooperative.
The direction LLVM takes is driven entirely by contributors.
I suggest you embrace this
and implement the necessary GC support in LLVM.
The devs would likely be happy to help out with any problems;
the team is *very* helpful.
Furthermore,
that support would open the door to implementing
other similar functional languages in LLVM,
rather than making more isolated code islands.
In the long run, LHC will win *big*
by having that same code used by others
(and tested, and expanded.)
There are many things for which it is reasonable to have
NIH (Not Invented Here syndrome).
In 2009, a fast code generator is not one of them.
David Himmelstrup said...
It's unsolved in the academic sense of the word.
Solving it requires research and not engineering.
If I knew how to solve it, I definitely would add it to LLVM.
It's only unsolved in the general case.
I doubt, however, that LLVM is interested in my specific data model
(which is in a state of flux, even).
what I want to do
can't yet be done by any general-purpose compiler.
Chris Lattner
Sun, 17 Dec 2006 12:45:42 -0800
LLVM is written in C++, but, like C--, it provides first-class support for
intermediate representation written as a text file (described here:
http://llvm.org/docs/LangRef.html), which allows you to write your
compiler in the language that makes the most sense for you.

In addition to the feature set of C--, LLVM provides several useful pieces
of infrastructure: a C/C++/ObjC front-end based on GCC 4, JIT support,
aggressive scalar, vector (SIMD), data layout, and interprocedural
optimizations, support for X86/X86-64/PPC32/PPC64/Sparc/IA-64/Alpha and
others, far better codegen than C--, etc. Further, LLVM has a vibrant
community, active development, large organizations using and contributing
to it (e.g. Apple), and it is an 'industrial strength' tool, so you don't
spend the majority of your time fighting or working around our bugs :).

Like C--, LLVM doesn't provide a runtime (beyond libc :) ), which can
be a good thing or a bad thing depending on your language (forcing you to
use a specific runtime is bad IMHO). I would like to see someone develop
a runtime to support common functional languages out of the box better
(which language designers could optionally use), but no-one has done so
yet.

OTOH, C-- does have some features that
LLVM does not yet have first class support for.
LLVM does not currently have support for generating efficient code
that detects integer arithmetic overflow, doesn't expose the
rounding mode of the machine for FP computation, and does not yet support
multiple return values, for example.

While it is missing some minor features, one of the most important
features of LLVM is that it is relatively easy to extend and modify. For
example, right now LLVM's integer type system consists of signed and
unsigned integers of 1/8/16/32 and 64-bits. Soon, signedness will be
eliminated (giving us the equivalent of C--'s bits8/bits16/bits32/bits64
integer types) and after that, we plan to generalize the integer types to
allow any width (e.g. bits11). This is intended to provide better support
for people using LLVM for hardware synthesis, but is also useful for
precisely constrained types like those in Ada (i.e. it communicates value
ranges to the optimizer better).

> I think the three new things I'd like to see out of C-- are (in rough
> order of priority):
> 1) x86-64 support
> 2) the ability to move/copy a stack frame from one stack to another, and
> 3) Some form of inline assembler without having to go to C (necessary for
> writting threading primitives in C--)

LLVM provides #1 and #3 'out of the box'. #2 requires runtime
interaction, which would be developed as part of the runtime aspect.

For me, one disappointment of the LLVM project so far is that we have not
been very successful engaging the functional language community. We have
people that use LLVM as "just another C/C++/Objc compiler", we have people
that reuse the extant front-ends and optimizer to target their crazy new
architectures, and we have mostly-imperative language people (e.g. python)
using LLVM as an optimizer and code generator. If we had a few
knowledgable people who wanted to see support for functional languages
excel, I believe LLVM could become the premier host for the functional
community.

If you are considering developing aggressive new languages, I strongly
recommend you check out LLVM. The llvmdev mailing list
is a great place to ask questions.
2006
> For me, one disappointment of the LLVM project so far
is that we have not been very successful engaging the
functional language community.

If you want to engage functional programmers,
you're not publishing in the right places.
PLDI gave up on functional programming long ago,
(Programming Language Design and Implementation)
and therefore
many functional programmers
no longer pay much attention to PLDI.

. one of the largest stumbling blocks for the industry adoption of
languages like Haskell and c--
is the fact that they still market themselves as
some mathematics/computer science professor's little experimental project.
I feel C-- still suffers a bit from "professor's pet project" syndrome .

> - GCC: Still quite complicated to work with, still requires you to write
> your compiler in C. Implementing a decent type system is going to be
> interesting enough in Ocaml or Haskell, I'll pass on doing that in C.
> Which means a hybrid compiler, with a lot more complexity. Also,
> functional languages are definitely still second class citizens in GCC
> world- things like tail call optimization are still not where they need to
> be. Which means implementing an optimization layer above GCC to deal with
> tail calls. Plus you still have all the run time library issues you need
> to deal with- you still need to write a GC, exception handlers, threading,
> etc. On the plus side, you do get a lot of fancy optimizations- SSE use,
> etc.
Where functional programming really shines, I think,
is programming in the large: word processors and CAD/CAM systems etc.
It's when you start dealing with things like maintenance
and large scale reuse and multithreading that
> functional programming really spreads its wings and flies.
And, unlike scripting/web programming, performance really does matter.

>
> - Use C as a back-end. You're writing your own runtime again, tail
> recursion is poorly supported again, and a lot of function programming
> constructs don't map well to C.

> - Use C--. You still have to implement your runtime, but you're basically
> going to have to do that anyways. You get decent optimization, you get to
> write your compiler in the language you want to, and functional languages
> are first class languages.
>
> Of these options, I think C-- (assuming it's not a dead project) is the
> best of the lot. Even if it needs some work (an x86-64 back end, the
> ability to move a stack frame from one stack to another), it'll be no more
> work than any other option. My second choice would be GCC as a back end,
> I think. But the point here is that the fundamental niche C-- fills is
> still useful and needed.
>

LLVM is very C-ish,
and makes it rather awkward to have
procedure environments and goto's out of procedures

Oct 2008 01:45:11 -0700
| Most of our users have reported that it is very easy to adapt a legacy
| compiler to generate C-- code, but nobody has been willing to attempt
| to adapt a legacy run-time system to work with the C-- run-time interface.

I don't know whether this'll be any use to anyone except us,
but we're using C-- like crazy inside GHC (the Glasgow Haskell Compiler).
But not as an arms-length language.
Instead,
inside GHC's compilation pipeline we use C-- as an internal data type;
and after this summer's work by John Dias,
we now have quite a respectable
story on transforming,
and a framework for optimizing,
this C-- code.
Since some of the runtime system is written in C--,
we also have a route for parsing C-- and compiling it down the same pipeline.
All that said,
this is a *GHC specific* variant of C--.
It does not support the full generality of C--'s runtime interface
(it is specific to GHC's RTS), nor is it intended as a full C-- implementation.
In its present state it's not usable as a standalone C-- compiler.
Still, it is a live, actively-developed implementation
of something close to C--, and so might be of interest to some.

The OCaml Journal has published around 40 articles now. The most popular and
third most popular articles are both about LLVM. So I don't think it is
correct to say that "functional language people don't like LLVM". Indeed, I
thought I was a kook for trying to write a compiler for a functional language
using LLVM until I mentioned it to the OCaml community and half a dozen
people stepped forward with their own alternatives. :-)


coercive polymorphism

11.9: adda/oop/declaring coercive polymorphism as {supertyping, subtyping}:
. in csp types (coercive subtype polymorphism), as exemplified by numbers,
the syntax has the choice of having either a type declare its subtypes,
or have each subtype declaring which supertype it inherits from .
. in a supertype like number,
the subtypes are known in advance, and so the membership is closed .
. perhaps the design should be unifying this syntax choice
. some csp types do need to add subclasses later
so the simplest syntax would not be one where
the supertype includes known subtypes .
pos:
. rather than keep meanings attached by conventional oop terminology,
it's better to stay with classic meanings of word components
eg, keep the definition of class the same way it's used in logic,
and then use classic terminology to create your own terminology
that is consistent and gives a diff'name for each big idea .
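
. a minimal C sketch of the two declaration styles (all names hypothetical):
closed membership as a tagged union the supertype declares,
vs open membership where each subtype links to its supertype:

/* closed: the supertype (number) declares its subtypes up front */
typedef enum { T_INT, T_QUOTIENT, T_FLOAT } num_tag;
typedef struct {
    num_tag tag;
    union {
        long i;
        struct { long dividend, divisor; } q;
        double f;
    } as;
} number;

/* open: each subtype declares which supertype it inherits from,
   so subclasses can be added later without editing the supertype */
typedef struct type_desc {
    const struct type_desc *supertype;   /* NULL at the root */
    const char *name;
} type_desc;

static const type_desc number_t = { 0, "number" };
static const type_desc int_t    = { &number_t, "int" };  /* added later */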

modular coercion

11.9: adda/oop/modular coercion:

. one way for superclasses like number to facilitate modularity
is to provide a common intermediate type (cit):
eg, number.class can describe record variants for all of its subtypes:
complex.rectangular.coord's =
( co.real.real
, co.imaginary.real
)
complex.polar.coord's = ()
real = { quotient, float, float.symbol }
quotient = ( dividend.int, divisor.int )
int = {0,1, ... }
float = {0, 1/inf, 1/(inf-1), ...., 1, 1+1/inf, ... inf }
float.symbol = { pi, e, ... },
. the supertype's cit gives the ways to describe value literals
in terms of lists, symbols, and integers .
. then subtypes can implement binary versions of these symbolic literals
in any way they choose .
. this way they don't need to know each other's impl'details
in order to supply conversions between subtypes;
they simply convert their subtype to or from the symbolic value
-- it's not the quickest,
but it's definitely an improvement over char'string conversions
if modularity between subtypes is required .
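
. a minimal C sketch of converting through such a cit (names hypothetical):
the quotient impl only knows how to emit the symbolic form,
and the float impl only knows how to read it,
so neither sees the other's binary layout:

/* the supertype's symbolic (cit) form of a quotient literal */
typedef struct { long dividend, divisor; } quotient_sym;

/* quotient subtype: binary value -> symbolic form */
quotient_sym quotient_to_sym(long dividend, long divisor) {
    quotient_sym s = { dividend, divisor };
    return s;
}

/* float subtype: symbolic form -> its own binary value */
double float_from_sym(quotient_sym s) {
    return (double)s.dividend / (double)s.divisor;
}

. so a quotient-to-float coercion goes quotient -> cit -> float:
not the quickest route, but it keeps the subtypes modular .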

frame-based type systems

11.8: adda/oop/frame-based type systems:

. a critical clarifier is how a frame-based subsystem
can be used by the type-checking subsystem .
def's:
. the purpose of the kb system is to support programs that are
acting on the domain as it is represented by the database .
. the type system needs to find a symbol's intended uses or roles,
and make sure it's not used in unintended ways .


web.adda/{frame lang, metaprogramming, quasiquote, Component-based}:



FLORA-2 is an advanced object-oriented knowledge base language
and application development environment.
The language of FLORA-2 is a dialect of F-logic
with numerous extensions,
including meta-programming in the style of HiLog
and logical updates in the style of Transaction Logic.
FLORA-2 was designed with extensibility and flexibility in mind,
and it provides strong support for modular software design
through its unique feature of dynamic modules.
. it's based on XSB
a Logic Programming and Deductive Database system .
a few open-source projects that use XSB:
# Flora is an object-oriented language
for building knowledge-intensive applications, which is based on the ideas of
F-Logic, HiLog and Transaction Logic.
# XMC is the main product of the Logic-based Model Checking project.
# Logtalk is an open source object-oriented logic programming language
that can make use of multi-threading.
# OpenSHORE is a hypertext repository
that stores data about and described by documents.
Access to this information is provided as hypertext.

OpenSHORE (Semantic Hypertext Object Repository)
an XML based Semantic Document Repository (SDR)
with a free definable meta model
that builds up a semantic network from sections
and relations in documents.
OpenSHORE is a hypertext repository:
The repository stores objects that appear in documents
together with their relations in the semantic net.
Hypertext navigation follows these relations
in the semantic net.

Frame-based systems represent domain knowledge
. frames are a notion originally introduced by Marvin Minsky (1975)
in the seminal paper
"A framework for representing knowledge" .
. prior to that, domain knowledge was represented by
rule-based and logic-based formalisms .

Minsky proposed organizing knowledge into chunks called frames.
These frames are supposed to capture the essence of
concepts or stereotypical situations,
by clustering all relevant information for these situations together.

Important descendants of frame-based representation formalisms
are description logics
that capture the declarative part of frames using a logic-based semantics.
Most of these logics are decidable fragments of first order logic
and are very closely related to other formalisms
such as modal logics and feature logics.

. Minsky's 1975 ideas evolved into FRL and KRL
(Bobrow and Winograd 1977).
KRL addressed almost every representational problem
discussed in the literature.
-- a very rich repertoire and almost unlimited flexibility.

Features that are common to FRL, KRL, and later frame-based systems
(Fikes and Kehler 1985) are:
(1) frames are organized in (tangled) hierarchies;
(2) frames are composed out of slots (attributes)
for which fillers (scalar values, references to other frames or procedures)
have to be specified or computed; and
(3) properties (fillers, restriction on fillers, etc.)
are inherited from superframes to subframes in the hierarchy
according to some inheritance strategy.
. from math, a monotone function is order-preserving:
x less than y implies
f(x) less than f(y) .

. monotonic logic is a formal logic whose consequence relation
is monotonic
meaning that adding a formula to a theory
never produces a reduction of its set of consequences.
Intuitively,
monotonicity indicates that learning a new piece of knowledge
cannot reduce the set of what is known.
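
[. the two notions of monotonicity above, stated in notation (a small addition):

\[ x \le y \;\Longrightarrow\; f(x) \le f(y) \]
\[ \Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \cup \Delta \vdash \varphi \]

ie, enlarging the theory with extra formulas \(\Delta\)
never removes a consequence \(\varphi\) .]
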
A monotonic logic cannot handle various reasoning tasks
such as reasoning by default
(consequences may be derived only because of
lack of evidence of the contrary),
abductive reasoning
(consequences are only deduced as most likely explanations)
and some important approaches to reasoning about knowledge
(the ignorance of a consequence must be retracted
when the consequence becomes known)
and similarly belief revision
(new knowledge may contradict old beliefs).

An example of a default assumption is that the typical bird flies.
As a result, if a given animal is known to be a bird, and nothing else is known,
it can be assumed to be able to fly.
The default assumption must however be retracted if it is later learned
that the considered animal is a penguin.
This example shows that a logic that models default reasoning
should not be monotonic.

a clash of intuitions: the current state of nonmonotonic multiple inheritance
ijcai.org/Past%20Proceedings/IJCAI-87-VOL1/PDF/094.pdf
frame lang's are found in the context of inheritance networks .
. these are built of relations, eg, the assertion prop(x)
is represented as a positive link x -> (class for which prop is true)
while a complementary assertion can have a negative link .
. a nonmonotonic system permits exceptions to inherited properties
eg, neg'links can override pos'links .

. the type systems of oopl's are unipolar (positive linking only) monotonic
but frame systems (eg FRL) are unipolar non-monotonic .

a homogeneous inheritance system (unlike a hetero' one)
is one or the other but not both of
{ monotonic: all links are strict
, thoroughly nonmonotonic: links are defeasible
} . the concern of this paper is
non-mono homo' multiple inheritance systems .
. a property of many logic systems
that states that the hypotheses of any derived fact
may be freely extended with additional assumptions.
Any true statement in a logic with this property
continues to be true, even after adding new axioms.
Logics with this property may be called monotonic,
to differentiate them from non-monotonic logic.
. typing systems that work similarly to frame'based kb systems
are termed prototype-based languages that use delegation:
the language runtime is capable of dispatching the correct method
or finding the right piece of data simply by following a series of
delegation pointers (from object to its prototype)
until a match is found.
As such, the child object can continue to be modified
and amended over time without rearranging the structure of its associated prototype
as in class-based systems.
It is also important to note that not only data but also methods
can be added or changed.
For this reason, most prototype-based languages
refer to both data and methods as "slots".
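
[. a minimal C sketch of that delegation dispatch; names are hypothetical:

#include <string.h>
#include <stddef.h>

/* both data and methods live in named slots */
typedef struct slot {
    const char *name;
    void *value;              /* datum or function pointer */
    struct slot *next;
} slot;

typedef struct object {
    slot *slots;
    struct object *proto;     /* delegation pointer */
} object;

/* follow the delegation pointers until a match is found */
void *lookup(const object *obj, const char *name) {
    for (; obj; obj = obj->proto)
        for (const slot *s = obj->slots; s; s = s->next)
            if (strcmp(s->name, name) == 0)
                return s->value;
    return NULL;
}

. adding or changing a slot on the child
never requires rearranging its prototype .]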

Metaobjects are lightweight classes that have but a single instance
(the metaobject's referent) .
A metaobject contains an explicit reference to its referent.
An object, working in tandem with a metaobject,
is functionally similar to a SELF object that describes its own behavior.
. a metaobject is any entity that manipulates, creates, describes, or implements other objects.
The object that the metaobject is about is called the base object.
Some information that a metaobject might store
is the base object's type, interface, class, methods,
attributes, variables, functions, control structures, etc.
A metaobject protocol (MOP)
is an interpreter of the semantics of a program that is open and extensible.
Therefore, a MOP determines what a program means
and what its behavior is,
and it is extensible in that a programmer (or metaprogrammer)
can alter program behavior by extending parts of the MOP.
The MOP exposes some or all internal structure of the interpreter
to the programmer.
The MOP may manifest as a set of classes and methods
that allow a program to inspect the state of the supporting system
and alter its behaviour.
MOPs are implemented as object-oriented programs
where all objects are metaobjects.
. compile-time MOPs, in contrast, operate during the
compiling process, but do not exist when the program is running.
One of the best-known runtime MOPs is the one described in the book
The Art of the Metaobject Protocol (often referred to as AMOP);
One example use of a MOP is to alter the implementation of multiple inheritance.
A recurring issue is how to resolve conflicting slots and methods of the superclasses.
Typically, language designers select one solution,
and language users must live with it.
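
. a python sketch of that use
(python's metaclasses expose part of the class machinery,
a small slice of a MOP):
here the user replaces the default conflict resolution
(leftmost superclass wins) with rightmost-wins:

class PreferRightmost(type):
    def __new__(mcls, name, bases, ns):
        merged = {}
        for base in bases:            # later bases overwrite earlier ones
            for attr, val in vars(base).items():
                if not attr.startswith("__"):
                    merged[attr] = val
        merged.update(ns)             # the class's own defs still win
        return super().__new__(mcls, name, bases, merged)

class A: greet = staticmethod(lambda: "from A")
class B: greet = staticmethod(lambda: "from B")

class C(A, B, metaclass=PreferRightmost):
    pass

print(C.greet())   # "from B" -- the default rule would say "from A"
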
A metaobject protocol is one way to implement aspect-oriented programming languages.
Many of the early founders of MOPs, including Gregor Kiczales,
have since moved on to be the primary advocates for
aspect-oriented programming.
http://en.wikipedia.org/wiki/Aspect-oriented_programming
Aspects emerged out of object-oriented programming and computational reflection.
AOP languages have functionality similar to, but more restricted than
metaobject protocols.
Aspects relate closely to programming concepts like
subjects, mixins, and delegation.
Other ways to use aspect-oriented programming paradigms include
Composition Filters
and the hyperslices approach.
Since at least the 1970s, developers have been using forms of
interception and dispatch-patching that are similar to
some of the implementation techniques for AOP,
but these never had the semantics of
crosscutting specifications written in one place.
Designers have considered alternative ways to achieve separation of code,
such as C#'s partial types,
but such approaches lack a quantification mechanism
that allows reaching several join points of the code
with one declarative statement.
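
. a crude python sketch of quantification
(the helper names are invented, not any real AOP language):
one declarative pattern reaches several join points at once:

import fnmatch, functools

def advise(pattern, advice):
    # class decorator: wrap every method whose name matches the pattern
    def deco(cls):
        for name, fn in list(vars(cls).items()):
            if callable(fn) and fnmatch.fnmatch(name, pattern):
                setattr(cls, name, advice(fn))
        return cls
    return deco

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kw):
        print(f"entering {fn.__name__}")   # the crosscutting concern
        return fn(*args, **kw)
    return wrapper

@advise("set_*", traced)    # one statement, many join points
class Account:
    def set_owner(self, o): self.owner = o
    def set_balance(self, b): self.balance = b

Account().set_owner("ann")  # prints: entering set_owner
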
Subject-oriented programming is an object-oriented software paradigm
in which the state (fields) and behavior (methods) of objects
are not seen as intrinsic to the objects themselves,
but are provided by various subjective perceptions ("subjects") of the objects.
The term and concepts were first published in September 1993
in a conference paper[1] which was later recognized as being
one of the three most influential papers to be presented at the conference
between 1986 and 1996[2]. As illustrated in that paper,
an analogy is made with the contrast between the philosophical views of Plato and Kant
with respect to the characteristics of "real" objects,
but applied to software ones.
For example, while we may all perceive a tree as having a
measurable height, weight, leaf-mass, etc.,
from the point of view of a bird,
a tree may also have measures of relative value for food or nesting purposes,
or from the point of view of a tax-assessor,
it may have a certain taxable value in a given year.
Neither the bird's nor the tax-assessor's additional state information
need be seen as intrinsic to the tree,
but are added by the perceptions of the bird and tax-assessor,
and from Kant's analysis,
the same may be true even of characteristics we think of as intrinsic.
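
. a tiny python sketch of that idea (names invented):
each subject keeps its own view of the object,
and nothing is added to the object itself:

class Tree:                  # the shared object carries no subject state
    pass

oak = Tree()
bird_subject = {}            # each subject's perception, keyed by object
assessor_subject = {}
bird_subject[oak] = {"nesting value": "high"}
assessor_subject[oak] = {"taxable value, 2009": 120}
# neither view is intrinsic to oak; each subject sees only its own fields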

Like aspect-oriented programming,
subject-oriented programming, composition filters, feature oriented programming
and adaptive methods
are considered to be aspect-oriented software development approaches.

Further advances in FOSD arose from recognizing the following facts:
Every program has multiple representations
(e.g., source, makefiles, documentation, etc.)
and adding a feature to a program could elaborate each of its representations
so that all representations are consistent.
The language in which the metaprogram is written is called the metalanguage.
The language of the programs that are manipulated is called the object language.
The ability of a programming language to be its own metalanguage
is called reflection or reflexivity.

Having the programming language itself as a first-class data type
(as in Lisp, Forth or Rebol) is also very useful.
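
. a small python sketch of a language serving as its own metalanguage:
the metaprogram holds an object-language program as ordinary data,
then compiles and runs it:

src = "def double(x): return 2 * x"   # object-language program, as data
ns = {}
exec(compile(src, "<meta>", "exec"), ns)
print(ns["double"](21))               # 42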

Generic programming invokes a metaprogramming facility within a language,
in those languages supporting it.

generative programming vs incremental compilation vs modifiable at runtime:
. the compiler is a metaprogramming tool;
if incremental compilation is available to a program
then metaprogramming can be performed without actually generating source code.
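
. a python sketch of that point:
when routines can be constructed at run time,
specialization needs no generated source text;
a closure bakes the parameter in directly:

def make_power(n):           # a metaprogram: builds a specialized routine
    def power(x):
        return x ** n        # n is fixed at construction time
    return power

square, cube = make_power(2), make_power(3)
print(square(7), cube(2))    # 49 8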


In Lisp metaprogramming, the unquote operator (typically a comma)
introduces code that is evaluated at program definition time
rather than at run time.
The metaprogramming language is thus identical to the host language,
and existing Lisp routines can be directly reused
for metaprogramming if desired.

. in lisp a quote is a hard quote,
meaning there is no attention paid to unquote as an escape .
. the quasiquote does allow the unquote to work as an escape from quote .
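
. a rough python analogue of quasiquote with unquote,
using the ast module (the function names are invented):
the template is code-as-data,
and the spliced constant plays the role of
the unquoted, definition-time value:

import ast

def make_adder(n):
    spliced = ast.Constant(value=n * 2)    # the "unquoted" part: computed now
    tree = ast.Expression(
        body=ast.BinOp(left=ast.Name(id="x", ctx=ast.Load()),
                       op=ast.Add(), right=spliced))
    ast.fix_missing_locations(tree)
    code = compile(tree, "<meta>", "eval") # the quasiquoted template, compiled
    return lambda x: eval(code, {"x": x})

add10 = make_adder(5)
print(add10(3))    # 13: the 10 was evaluated at definition time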


Accord Programming Framework (pdf)
The Accord programming framework supports the development of
autonomic Grid applications .
Accord enables runtime composition and autonomic management
of these components using dynamically defined rules.

. current (2004) frameworks have interaction patterns
that can't be defined dynamically .
. compon'based frameworks don't have
the context-aware self-mgt of individual compon's .
. service'based models implicitly assume
that context does not change during app`lifetime .
. dynamic context requires autonomic computing:
systems capable of managing themselves using high-level rules
with minimal human intervention.

The Accord programming framework consists of 4 concepts.
* an application context
that defines a common semantic basis for the application.
* the definition of autonomic components
as the building blocks of autonomic applications.
* the definition of rules and mechanisms for
the dynamic composition of autonomic components (sketched below).
* an agent infrastructure
to support rule enforcement to realize self-managing
and dynamic composition behaviors.

Accord builds on the AutoMate middleware infrastructure
that provides the essential services required to support
the development and execution of autonomic applications.
( naming service,
discovery service,
lifecycle management service,
and registration service.
)
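
. a hypothetical python sketch of the third concept
(all names invented; Accord's actual interfaces differ):
a component's behavior is composed from rules injected at run time:

class Component:
    def __init__(self, name):
        self.name, self.rules, self.load = name, [], 0
    def add_rule(self, condition, action):   # dynamic composition
        self.rules.append((condition, action))
    def manage(self):                        # one self-management step
        for condition, action in self.rules:
            if condition(self):
                action(self)

worker = Component("worker")
worker.add_rule(lambda c: c.load > 10,
                lambda c: print(f"{c.name}: spawning a replica"))
worker.load = 42
worker.manage()    # the rule fires: worker: spawning a replica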

the right thing vs C

11.7: adda/the right thing vs c:
. an mit scheme (lisp) fan railed against c as a virtual virus,
pointing out that it was easy on compiler writers
and writing this one simple compiler
gave access to libraries of unix code
-- never mind that the whole c-unix way of doing things
was not making it easy to catch app'designer errors .
. how could they pick c over lisp,
when scheme compilers were so mature, just as efficient,
and provided libraries that were much safer than unix ?

paths from adt's to concrete bindings

11.7: adda/oop/paths from adt's to concrete bindings:
. it was pointed out that [everything is an object]
had been a selling point of some oopl's;
whereas others had insisted that pure oop was inefficient .
. oop is a great way to simplify things for the app'designer;
but we don't need to make things easy on the compiler writer!
. the thing that makes oop`objects inefficient
is that the compiler is given no path to efficiency:
what can be statically determined about the object?
ie,
can it be proven that the symbol has
only the values of, and is accessed only by the operations of,
a binding to an efficient interface?
eg,
on most systems there is a binding to
hardware support for integer arithmetic
and if you can show that for some subscope,
a number really will be an integer,
then all calls can be bound to that hardware binding
rather than to the abstract type'mgt .
. of course for integers
this will be the first thing that adda does,
but it needs a way to do this for
an arbitrary binding .
. each type'mgt must have a way of expressing
a path to concrete types .
. imagine 2 interfaces:
one is all abstract, and the code is open:
the abstract procedures show how it
checks type.tags and value ranges
and maps these to the type'mgt's concrete calls library .
. the point of having your own language is,
given any platform and its special mix of features,
be able to describe those features in your language .
. that doesn't mean a hal (hardware abstraction level)
has to be inefficient,
rather,
by asking the compiler to do more work (after the rad, of course)
programming to interfaces can be just as efficient
(in the best case of course,
compiler complication is a trade-off with stability) .
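
. a python sketch of the idea (invented names):
the abstract path checks tags on every call,
while a proof that the operands are machine integers
lets the compiler bind calls straight to the concrete routine:

def hardware_add(a, b):
    # stands in for the platform's integer-arithmetic binding
    return a + b

def abstract_add(a, b):
    # the general path: check tags, then map to the concrete call
    if isinstance(a, int) and isinstance(b, int):
        return hardware_add(a, b)
    raise TypeError("unsupported number representation")

# a compiler that proves x,y are integers in some subscope
# would emit hardware_add(x, y) directly, skipping the tag checks
print(abstract_add(2, 3))    # 5, via the general path
print(hardware_add(2, 3))    # 5, via the proven concrete binding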

booking oop with Cardelli

. oop has extremely poor modularity properties
with respect to class extension and modification.
For example, it is easy to override a method
that should not be overridden,
or to reimplement a class in a way that
causes problems in subclasses.
Other large-scale development problems include
the confusion between classes and object types,
which limits the construction of abstractions,
and the fact that subtype polymorphism
is not good enough for expressing container classes .
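
. a small python sketch of the first hazard:
a subclass override silently changes behavior
that the base class relies on through internal self-calls:

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def deposit_all(self, amounts):
        for a in amounts:
            self.deposit(a)      # internal self-call: an implicit contract

class CheckedAccount(Account):
    def deposit(self, amount):   # overrides more than its author intended:
        if amount <= 0:          # deposit_all is now silently re-routed
            raise ValueError("non-positive deposit")
        super().deposit(amount)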

Here are some things that could or should
be done for software engineering with respect to oop:

* Economy of execution.
Much can be done to improve the efficiency of method invocation.
We also need to design type systems that can
statically check many of the conditions that now require
dynamic subclass checks.
* Economy of compilation.
the separate compilation of (sub)classes,
without resorting to recompilation of superclasses
and without relying on "private" information in interfaces.
* Economy of small-scale development.
improve error detection and the expressiveness of interfaces.
* Economy of large-scale development.
formulating and enforcing inheritance interfaces:
the contract between a class and its subclasses
(as opposed to the instantiation interface
which is essentially an object type).
Parametric polymorphism is beginning to appear
and its interactions with object-oriented features
need to be better understood (see the sketch after this list).
Subtyping and subclassing must be separated.
Similarly, classes and interfaces must be separated.
* Economy of language features.
Prototype-based languages [provide simpler,
more composable features with orthogonality] .
How can we design powerful engineering
that is also simple and reliable?
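
. a python sketch of the parametric-polymorphism point
(using python's typing generics as the stand-in):
the container keeps its element type,
where plain subtype polymorphism would erase it to a root type:

from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    def __init__(self) -> None:
        self.items: List[T] = []
    def push(self, item: T) -> None:
        self.items.append(item)
    def pop(self) -> T:
        return self.items.pop()

s: Stack[int] = Stack()
s.push(1)
n: int = s.pop()   # element type preserved: no downcast needed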