2010-12-31

survey of programming architectures

11.15: web.adda/oop/architectures:

the categories of inheritance:

# type clustering (inclusion polymorphism):

. numbers are the classic type cluster;
that type's major subtypes have overlapping interfaces;
and they need a supertype to coordinate biop's
(binary operations; a function with 2 arg's;
eg, addition's signature is: +:NxN->N )
whenever the param's have unmatched subtypes
(eg, RxC->C, ZxN->Z, /:NxN->Q, ...).

type cluster/supervision models:
#coordinated:
. the set of polymorphic subtypes is fixed,
and the supertype knows how to convert between them;
because,
it knows the data formats of all its subtypes .
# translated:
. the supertype provides an all-inclusive
universal data format;
eg, numbers -> complex .
. all subtypes convert between that format
and their own .

type cluster/subtypes must include range constraints:
. range constraints are essential for efficiency
as well as stronger static typing;
because, range limits are what allow
direct use of native numeric types .
. typical native types include
{N,Z,R}{8,16,32,64,128} .
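. the supertype's coordination of biop's over unmatched subtypes
can be sketched in Python (the Num class, its tags,
and the coercion ranking are all invented for illustration;
adda itself would do this in the supertype module):

```python
# a minimal sketch of the "coordinated" type cluster model:
# the supertype knows the closed set of subtypes and their formats,
# so it can coerce unmatched biop arguments before dispatching.
class Num:
    # fixed, closed set of subtypes, ranked for widening: N < Z < Q < R < C
    RANK = {'N': 0, 'Z': 1, 'Q': 2, 'R': 3, 'C': 4}

    def __init__(self, tag, value):
        assert tag in Num.RANK
        self.tag, self.value = tag, value

    def coerce(self, tag):
        # the supertype can convert any subtype's format to a wider one
        return Num(tag, self.value)

    def add(self, other):
        # coordinate a biop over unmatched subtypes:
        # widen both args to the wider subtype's format
        tag = max(self.tag, other.tag, key=Num.RANK.get)
        a, b = self.coerce(tag), other.coerce(tag)
        return Num(tag, a.value + b.value)

x = Num('Z', -2)             # an integer
y = Num('R', 0.5)            # a real
print(x.add(y).tag)          # the result is widened to 'R'
```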

# type classing (Subtype polymorphism):
. declaring that a type is a member of a class,
and is compatible with that class
by inheriting its interface;
the new type is then usable
anywhere the inherited class is . [12.31:
. the type class is defined by its interface;
any type following that interface
is considered a member of that class .
. it's not about sharing code by extension;
it's organizing hierarchies of compatibility .]
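. interface-defined membership can be sketched with Python's
structural Protocol (Printable and Label are invented names;
the Protocol mechanism stands in for the type class):

```python
# sketch of type classing: membership is defined by following
# the interface, not by extending an implementation.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Printable(Protocol):
    # the type class is defined by its interface
    def render(self) -> str: ...

class Label:
    # never names Printable, but follows its interface,
    # so it counts as a member of that class
    def render(self) -> str:
        return "label"

def show(p: Printable) -> str:
    # usable anywhere the class is expected
    return p.render()

print(isinstance(Label(), Printable))   # True
print(show(Label()))                    # label
```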

# type cluster combined with type classing:
. the subtypes of a type cluster
can be type classed; eg,
a dimensioned number could inherit from int;
and then to coordinate with the numeric supertype
it uses functionality from int.type
to deal with these messages:
{ what is your numeric subtype?
, your numeric value?
, replace your numeric value with this one
} .
. with just that interface,
any subclass of any numeric subtype
can be used in any numeric operation . [12.31:
. all self-modifying operations ( x`f)
can be translated as assignments (x`= f(x));
so then the inherited subtype
provides all the transform code .]
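. the three coordination messages and the x`f to x`= f(x)
rewrite can be sketched like so (Meters and incr are
hypothetical names for a dimensioned number and a transform):

```python
class Meters:
    # a hypothetical dimensioned number, type-classed under int;
    # it need only answer the three coordination messages
    def __init__(self, n):
        self.n = n
    def numeric_subtype(self):      # what is your numeric subtype?
        return 'Z'
    def numeric_value(self):        # your numeric value?
        return self.n
    def replace_value(self, n):     # replace your value with this one
        self.n = n

def incr(v):
    # a pure unary transform supplied by the inherited subtype
    return v + 1

x = Meters(3)
# the self-modifying x`incr rewritten as the assignment x`= incr(x):
x.replace_value(incr(x.numeric_value()))
print(x.numeric_value())   # 4
```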

#type classing without clustering:
11.20:
. without type clustering;
what does type classing do then?
are biop's supported? polymorphism?
. historical reasons for inheritance:
# polymorphism
# type compatibility
# reuse of work .
. you want to extend a type's
structure and functionality,
not interfere with its code base,
and still be useful everywhere your ancestors are .

. in the popular oop model,
the inherited work is reused by
adding to an inherited type's
functionality and instance var'space
(creating a polymorphism in the type).
. there's type compatibility because
the obj' can handle all the ancestor's
unary and self-modifying functions;
but, popular oop approaches differ on
how biop's are handled .

. the classic, math'al oop uses clusters, [12.31:
which can handle biop's because the supertype
has limited membership to its type class
and can thus know in advance
what combinations of subtypes to expect
among a biop's pair of arg's .
. in a system without clustering's
closed class of subtypes
then there is no particular type to handle
the coordination of mixed biop arg's .
(that mix can consist of any types in
one arg's ancestors, or their descendants).]

. if subtypes can redefine a biop,
then a biop's method might be arbitrated by:
# nearest common ancestor:
the arg' set's nearest common ancestor type;
# popular:
the first arg determines the method;
# translation:
. an inheritable type has a universal format
which inheritors convert to,
in order to use the root's biop method .
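. the first two arbitration rules can be sketched in Python
(the Number/Int/Real hierarchy and both dispatch helpers are
invented for illustration):

```python
class Number:
    def add(self, a, b):
        return ('Number.add', a.v + b.v)

class Int(Number):
    def __init__(self, v):
        self.v = v
    def add(self, a, b):            # a subtype redefining the biop
        return ('Int.add', a.v + b.v)

class Real(Number):
    def __init__(self, v):
        self.v = v

def nearest_common_ancestor(a, b):
    # walk the first arg's ancestor chain until a class
    # that also contains the second arg is found
    return next(c for c in type(a).__mro__ if isinstance(b, c))

def biop_add(a, b):
    # rule 1: the arg set's nearest common ancestor arbitrates
    cls = nearest_common_ancestor(a, b)
    return cls.add(a, a, b)

def first_arg_add(a, b):
    # rule 2 ("popular"): the first arg determines the method
    return type(a).add(a, a, b)

x, y = Int(2), Real(0.5)
print(biop_add(x, y)[0])        # method from the shared Number supertype
print(first_arg_add(x, y)[0])   # Int's own method
```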

# incremental composition:
. it can be simplifying to describe a type
in terms of how it differs from other types;
this case includes anything not considered to be
type clustering or subclassing .
. revisions such as removing inherited parts
can preclude type compatibility;
in such cases, compatibility could be declared
with the use of a conversion map .
. incremental composition provides
module operators for building in ways
familiar to lisp users:
code can read other code, modify it,
and then use it as a module definition .
[11.20:
. with incremental composition,
any inheritance behaviors should be possible;
but the built-in inheritance should be
simple, classic type clustering and classing
as described above .
. the directions of popular oop
are not helping either readability or reuse;
esp'y unrewarding is the ability to
inherit multiple implementations
that have overlapping interfaces .]

#frameworks:
11.15:
. generic types can implement frameworks:
a type is an interface with all code supplied;
a generic type
leaves some of its interface undefined
or optionally redefinable,
with the intent that parameter instantiations
are customizing the framework;
eg,
a typical gui framework would be impl'd as
a generic task type;
so that creating an obj' of that type
initiates a thread of execution
that captures all user input
and responds to these events by
calling functions supplied by the
framework's customizing init's .
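. that generic-task idea can be sketched with a thread and a
queue (GuiTask and its on_event parameter are invented names;
the parameter is the framework's customization point):

```python
import threading, queue

class GuiTask:
    # a generic task type: instantiation params customize the framework
    def __init__(self, on_event):
        self.on_event = on_event            # supplied at instantiation
        self.events = queue.Queue()
        self.thread = threading.Thread(target=self.run)
        self.thread.start()                 # creating the obj starts a thread

    def run(self):
        # the thread captures all input events and responds by
        # calling the function supplied by the customizing init
        while True:
            e = self.events.get()
            if e is None:
                break
            self.on_event(e)

seen = []
t = GuiTask(seen.append)
t.events.put('click')
t.events.put(None)       # shut the task down
t.thread.join()
print(seen)              # ['click']
```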

adda/oop/value types:
11.16:
. the classic use of oop is type clustering
as is done for numerics:
it provides users of the numeric library
with an effortless, automated way
to use a variety of numeric subtypes
while also employing static typing,
and enjoying any enhanced readability or safety
that may be provided by that .
. coercions and range checks can all be
tucked under the hood,
without requiring compliance from clients .
. this automation is possible because
the designer of a type cluster's supertype
is using subtype tags to determine
each value's data format .

. the supertype module is also
the only place to coordinate
multiple param's having unmatched subtypes;
after one param' is coerced to match the other,
operations involving matched binary subtypes
are then relegated to subtype modules .

11.19: intro to value`type:
. static typing generally means
that a var's allowed values are confined to
one declared type,
and perhaps also constrained;
eg, limited to a range of values,
or a specific subtype .
. if that declared type is a type cluster,
its values will include a type tag
for use by the supertype module,
to indicate which of its subtype modules
is responsible for that data format .

. type.tags are sometimes seen as a way to
replace static typing with ducktyping
(where the tag is used at run-time
to check that the given value has a type
that is compatible with the requested operation).
. type clustering, in contrast to ducktyping,
is static typing with polymorphism
(statically bound to the cluster's supertype);
and there, the purpose of the type.tag
is merely to allow the supertype module
to support a variety of subtypes,
usually for the efficiency to be gained
from supporting a variety of data formats;
eg,
if huge complex numbers won't be used,
then a real.tag can indicate there is
no mem' allocated for the imaginary component;
or,
if only int's within a certain range will be used,
then the format can be that of a native int,
which is considerably faster than non-native formats .
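. the tag-selects-format idea can be sketched like so
(the Cx class and its two formats are invented; a real cluster
would use packed native storage rather than Python objects):

```python
class Cx:
    # the type.tag selects the data format:
    # a 'real' tag means no storage for the imaginary component
    def __init__(self, re, im=None):
        if im is None:
            self.tag, self.data = 'real', re           # one slot
        else:
            self.tag, self.data = 'complex', (re, im)  # two slots

    def imag(self):
        # the supertype module reads the tag to pick the format
        return 0.0 if self.tag == 'real' else self.data[1]

print(Cx(1.5).tag)           # real
print(Cx(1.5).imag())        # 0.0
print(Cx(1.5, 2.0).imag())   # 2.0
```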

. the value's subtype (or value`type)
is contrasted with a var's subtype
to remind us that they need not be equal
as long as they are compatible;
eg,
a var' of type"real may contain
a value of type"integer;
because they are both subtypes of number,
and the integer values are a
subset of the real values
(independent of format).

. the obj's subtype puts a limit on
the value`types it can support;
eg,
while a var' of subtype"R16 (16-bit float)
can coerce any ints to float,
it raises an exception if that float
can't fit in 16-bit storage .
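. that limit can be sketched with Python's half-precision
struct format (the R16Var class is invented; 65504 is the
largest finite IEEE 16-bit float value):

```python
import struct

R16_MAX = 65504.0   # largest finite IEEE half-precision value

class R16Var:
    # a var of subtype R16: accepts any value`type it can coerce,
    # but the declared subtype bounds what it can store
    def assign(self, value):
        v = float(value)                 # coerce int -> float
        if abs(v) > R16_MAX:
            raise OverflowError("value does not fit R16 storage")
        # round-trip through the native 16-bit format
        self.value = struct.unpack('e', struct.pack('e', v))[0]

x = R16Var()
x.assign(3)          # an integer value in a real var: compatible
print(x.value)       # 3.0
try:
    x.assign(10**6)  # too big for 16-bit storage
except OverflowError as e:
    print("exception:", e)
```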

. another possibly interesting distinction
between var' types and value`types
is that value`types have no concept of
operating on self; [11.19:
a unary operation over a value`type
doesn't involve any addresses,
and there is nothing being modified .
. while popular oop has a var`address
modify itself with a msg,
eg, x`f;
classic oop would say that was an
assignment stmt plus a unary operation:
x`= x`type`f(x) -- shown here fully qualified
to indicate how modularity is preserved:
the function belongs to x's type .]

. adda can also enforce typing between
unrelated types like {pure number, Meters},
but the system depends on supertype designers
to correctly handle their own subtypes .

. in addition to the distinction between
{library, application} programmers,
there is also kernel mode:
the adda run-time manages all native types
so that any code that
could be responsible for system crashes
is all in one module .

10.23: news.adda/compositional modularity:
11.14: Bracha, Lindstrom 1992`Modularity meets Inheritance
We "unbundle" the roles of classes
by providing a suite of operators
independently controlling such effects as
combination, modification, encapsulation,
name resolution, and sharing,
all on the single notion of module.
All module operators are forms of inheritance.
Thus, inheritance not only is
not in conflict with modularity in our system,
but is its foundation.
This allows a previously unobtainable
spectrum of features
to be combined in a cohesive manner,
including multiple inheritance, mixins,
encapsulation and strong typing.
We demonstrate our approach in a language:
Jigsaw is modular in two senses:
# it manipulates modules,
# it is highly modular in its own conception,
permitting various module combinators to be
included, omitted, or newly constructed
in various realizations .
10.23: Banavar 1995`compositional modularity app framework:
11.14:
. it provides not only decomposition and encapsulation
but also module recomposition .
. the model of compositional modularity is itself
realized as a generic, reusable software arch',
an oo-app framework" Etyma
that borrows meta module operators
from the module manipulation lang, Jigsaw
-- Bracha 1992`modularity meets inheritance .

. it efficiently builds completions;
ie, tools for compositionally modular systems .
. it uses the unix toolbox approach:
each module does just one thing well,
but has sophisticated and reliable mechanisms
for massive recomposition .
. forms of composition:
#functional: returns are piped to param's;
#data-flow: data filters piped;
#conventional modules: lib api calls;
# compositional modularity:
. interfaces and module impl's
operated on to obtain new modules .

. oop inheritance is a form of recomposition;
it's a linguistic mechanism that supports
reuse via incremental programming;
ie, describing a system in terms of
how it differs from another system .
. compositional modularity evolves
traditional modules beyond oop .

. that compositional modularity
sounds interesting,
what's the author been up to recently?
reflective cap'based security lang's!

Bracha 2010`Modules as Objects in Newspeak:
. a module can exist as several instances;
they can be mutually recursive .
. Newspeak, a msg-based lang has no globals,
and all names are late-bound (obj' msg's).
. programming to an interface (msg's vs methods)
is central to modularity .

. it features cap'based security:
# obj's can hide internals
even from other instances of the same class;
# obj's have no access to globals
thus avoiding [ambient authority]
(authority acquired implicitly from the environment) .
# unlike E-lang, Newspeak supports reflection .

Newspeak handles foreign functions
by wrapping them in an alien obj,
rather than let safe code
call unsafe functions directly .
--. this is the equivalent of SOA:
whatever you foreigners want to do,
do it on your own box (thread, module)
and send me the neat results .

editor's uncluttered ascii mode

12.26: adde/ascii mode:
news:
"( Many developers want ASCII-based editors,
in which case the syntactic clutter of
varying annotations
can rapidly become overwhelming. )
--
. adde makes it easier to use graphics
and other coded features,
by folding in code like html does,
but, for the sake of ascii tools,
it also needs to optionally provide
a file system that removes clutter
by putting the code into
files separate from the ascii .
. after modification by a foreign editor
the adde editor needs a way to know
how to merge the code with the text .
. for this it can use ascii bookmarks
which follow the head-body pattern:
their title indicates what they represent
and they have a pointer into the code file
showing where the body of code is
that implements that graphic .

. if adde finds a modified ascii file,
it checks for missing bookmarks
in order to delete the corresponding bodies .

. the code file is ascii too, of course,
but none of the words are about content
rather they give details of graphics .
. a foreign editor can see
the body of any given bookmark
by searching for the bookmark's pointer
within the code file .

. assuming that any given project folder
will have numerous text files
that may all need ascii-code separation,
one simple way to manage this separation
is to put all the code files
in a subfolder named code;
so that they can have
exactly the same name as the
ascii files they belong to;
so, if working on file: ./x,
--[the ./ means current folder ]--
then a search in ./code/x
will find the body of a bookmark .
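. the lookup can be sketched like so (the find_body helper,
the `#fig1:` pointer format, and the sample body are all
invented to illustrate the head-body pattern):

```python
import os, tempfile

def find_body(folder, ascii_name, pointer):
    # search ./code/<same name> for the bookmark's pointer;
    # the rest of that line is the body implementing the graphic
    code_path = os.path.join(folder, 'code', ascii_name)
    with open(code_path) as f:
        for line in f:
            if line.startswith(pointer):
                return line[len(pointer):].strip()
    return None   # missing bookmark: its body can be deleted

# build a toy project folder: ./x would hold the ascii,
# ./code/x holds the graphics details
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'code'))
with open(os.path.join(root, 'code', 'x'), 'w') as f:
    f.write('#fig1: circle at (2,3) radius 5\n')

print(find_body(root, 'x', '#fig1:'))   # circle at (2,3) radius 5
```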

2010-12-30

optional ducktyping

adda/oop/optional ducktyping:
12.28: syntax:
. instead of optional typing,
what about optional ducktyping?
. the easy way to declare a local
is to have its first use describe its type;
ie, its declaration can double as its first use;
eg, v.t -- declares "(v) to be of type"(t);
and thereafter, the var's type ext is optional;
eg, v -- the var whose full name is v.t .
. in the case of ducktyping,
the first use has the null type;
eg, ( var. ) -- the dot at the end
assures the parser that you didn't
forget to specify the var's type;
rather, you want a ducktype compiled;
ie, the parser is to find the list of
operations that were applied to that var,
and declare that to be the var's interface;
eg, f(var.); g(var) .
-- that says var is a ducktype,
and its interface is: f(), g() .
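. the parser's inference step can be sketched like so
(the calls list stands in for what the parser would gather
from the source; infer_interface is an invented helper):

```python
# a recorded list of (operation, argument) pairs, standing in
# for the applications the parser finds in the subprogram
calls = [('f', 'var'), ('g', 'var'), ('h', 'other')]

def infer_interface(name, applied):
    # find the operations applied to the var and declare
    # that set to be the var's interface
    return sorted({fn for fn, arg in applied if arg == name})

print(infer_interface('var', calls))   # ['f', 'g']
```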

12.28, 12.30: other:
. how are typed and ducktyped obj's interacting?
both {types, ducktypes} have type.tags;
but whereas a ducktyped object
carries the type.tag with its value,
the statically typed var stores its tag
in the local symbol table
of the subprogram where it resides;
ie, the symbol table storing a ducktyped var
will declare its type as null,
meaning to be delivered at run time .
. the type.tag is a pointer to the var's type mgt
which will have a dictionary of operations .
. during run time, the ducktype's type mgt is then
checked for having all the expected operations .

# a typed obj assigned to a ducktype:
needs to be wrapped in a type.tagged box,
ie, ducktyped obj = (value, type.tag);
and statically typed obj = (value) .

# a ducktyped obj assigned to a typed obj:
needs to be unboxed before assignment
(the type.tag is compared to the destination's expected type;
and, if compatible, the type.tag is stripped out,
and the remaining value assigned).
. the assigned ducktyped obj
then still virtually has the same type.tag,
only it's now stored statically in the symbol table
rather than dynamically beside its value .
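. the two conversions can be sketched like so (the symbol_table
dict, to_ducktype, and to_typed are invented names standing in
for the compiler's tables and the run-time's boxing code):

```python
symbol_table = {'n': 'int'}   # a statically typed var: tag kept here

def to_ducktype(name, value):
    # typed -> ducktyped: wrap the value in a type.tagged box
    return (symbol_table[name], value)

def to_typed(name, boxed):
    # ducktyped -> typed: compare tags, then strip the tag
    tag, value = boxed
    if tag != symbol_table[name]:
        raise TypeError("incompatible type.tag")
    return value

box = to_ducktype('n', 42)
print(box)                  # ('int', 42): value carries its tag
print(to_typed('n', box))   # 42: tag stripped, now static again
```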

review of the current stacking system:
. static types can help efficiency:
if a parameter is statically typed,
then the type's mgt can advise on
the typical amount of stack space
needed by the union of its variants;
otherwise, the system can only plan on stacking
# a ptr to trailer (local heap); or,
# a ptr to readonly .

12.30: binary operations (biops):
. just like (statically) typed var's,
ducktyping can involve class clustering
where a biop's arg's can be ducktypes;
in these cases, the run-time is checking
that both arg's have a common supertype .
. an extension of that idea is where
any number of arg's can be accommodated:
# by checking all for a common supertype;
# by checking every arg's type for the presence of
a function with a compatible signature .
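. the common-supertype check over any number of arg's can be
sketched like so (the Number/Int/Real hierarchy is invented):

```python
class Number: pass
class Int(Number): pass
class Real(Number): pass

def common_supertype(*args):
    # check all args for a shared ancestor (ignoring object,
    # which everything trivially shares)
    chains = [set(type(a).__mro__) - {object} for a in args]
    return set.intersection(*chains)

print(bool(common_supertype(Int(), Real())))   # True: both are Numbers
print(bool(common_supertype(Int(), "text")))   # False: no common supertype
```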

vector literals syntax

12.28: adda/dstr/vector literals:
. as with the declare's syntax,
the opening angle bracket of a vector literal
could be distinguished from less-than
by following it with a dot .
. ascii tokens are useful even with unicode
because keyboards can quickly dish up ascii .
. the parser converts ascii to unicode:
less-than & dot -> opening vector bracket (U+27E8)
--. the closing pair is: U+27E9 .
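. the conversion is a simple rewrite (the closing digraph ">."
is an assumption here; the text only specifies "<." for the opener):

```python
def to_unicode(src):
    # map the ascii digraphs to the unicode vector brackets
    return src.replace('<.', '\u27E8').replace('>.', '\u27E9')

print(to_unicode('<.1, 2, 3>.'))   # ⟨1, 2, 3⟩
```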
related trivia:
Miscellaneous Mathematical Symbols:
U+27C0 ... U+27EF, U+2980 ... U+29FF .
'Miscellaneous Technical' 2300 ... 23FF:
(has a left-pointing angle bracket U+2329) .
--. ascii codes for {math relations, brackets}:
U+003C Less-than sign
U+003E Greater-than sign .
. unicodes that mac os x supports .

2010-12-29

std input and gui interaction drivers

12.27: adda/arch/std`input:
. in my idealized expression of the mvc pattern,
the designers of apps should never have to
program the details of the user's gui;
they simply write to data structures,
and the user`agents have their favorite ways of
expressing structures graphically;
but what if the app designer
wants to contribute new ways of
graphical interaction ?
. such gui drivers shouldn't get direct access
to the mouse and keyboard (input events);
instead, like unix is designed,
drivers get gui events from std`input;
and, the user`agent is free to
feed the gui driver either the actual input events
or the user`agent's version of input events .
[12.29: for modularity:
. the gui driver's usefulness to some users
may depend on the user`agent's ability to
filter the user's input,
or to simulate it with an algorithm;
eg, an adaptation for those with the shakes
might be to filter duplicate inputs .
adde plug-in's:
. the general way to develop the gui
is for adda, the compiler,
to be aware of plug-in's for adde, the editor;
a gui driver is associated with a typemark,
and then when adde is displaying
items of that type,
it lets the associated gui driver
decide how inputs translate into
modifying a value of that type .
. there can also be event-triggered gui drivers
(vs triggered by accessing a certain type)
eg, a selection lasso driver
would be triggered by a mouse-down drag,
and instead of stopping at mouse-up,
it would keep accumulating selection rectangles,
until, say, the enter key was pressed .]
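. the shakes adaptation mentioned above can be sketched as an
input filter the user`agent runs before the gui driver sees
events (dedupe and the tick-based window are invented details):

```python
def dedupe(events, window=2):
    # drop repeats of the same event arriving within `window` ticks,
    # so tremor-induced double inputs never reach the gui driver
    out, last = [], {}
    for t, e in events:
        if e not in last or t - last[e] > window:
            out.append((t, e))
        last[e] = t
    return out

raw = [(0, 'click'), (1, 'click'), (5, 'click')]
print(dedupe(raw))   # [(0, 'click'), (5, 'click')]
```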

declare block syntax

12.28: adda/cstr/declare:
. with a declare block represented by
this: .(BODY)
-- a dot-prefixed parenthetical --
it's immediately confused with a
type description literal;
instead, the syntax should be
a special case of function literal:
(.(x) BODY)
by just having a null parameter:
(. BODY);
function literals will eval
even when parameterized
-- if complete defaults are given
and the literal is not quoted --
and can thereby double as a declare block .

. the arg list can also be a
symbol returning type record:
(.myrec BODY);
[12.29: the point being that
anywhere a record def' literal can go,
a symbol for the same should be allowed;
but what about the syntax for functions?
f(x).returntype
-- that sets a precedent for
parenthetical literals not (yet) being
replaceable by a symbol;
but in that case,
the ().type pattern is there to
intuitively identify a function
-- f(x) = name & parenthetical literal .
. when expressing the function's arg type as a typemark,
readers need to be guided as to whether
a symbol is a {typemark, local param's name};
this can be done by using both
the paren'literal for "(this a function decl)
and the dot-prefix on the symbol
for "(this is a typemark):
eg, f(.typemark).returntype .
conclusion:
. the original (.(x) BODY) syntax
is modeled after ( var.type )
but puts the .(x) at the enclosure's opening
to show that it's typing the enclosure itself,
and not something the enclosure is qualifying;
it says:
this enclosure is a function of this arg, x,
eval'able when x is instantiated .

. the idea of typed enclosures representing sets
comes from:
# set-builder notation:
variously syntaxed as {x: body}, or {x|body}
either of which is generally ambiguous
given popular meanings for colon and div .
# types as sets of values:
to type a var without init'ing it
is to say it currently represents
a set of values .

. reasons why that syntax can't be
(body).(x):
# parameters are intuitively expected
at the beginning, as is done in math:
{x: body}, or {x|body};
# (b).(x) is confusable with a type literal:
one expressing a function returning record .]

newspeak for Strongtalk

12.26: adda/arch/newspeak for strongtalk:

[12.29:
Bracha, a proponent of pluggable types,
was concerned that it weakened security
to rely on datatypes rather than
use oop`ducktyping everywhere .
. datatypes make several leaps of faith:
# the compiler has correctly analyzed
the program's compliance to type compatibility;
# the compiler's optimizations
still maintain this compliance;
# changes to the environment don't bedevil
assumptions required by this compliance .]

. Bracha, a proponent of Strongtalk
(smalltalk with pluggable types)
has moved on to Newspeak
but expects a pluggable typesystem
can be integrated later .
. Newspeak's most notable difference
seems to be capability-based security (cap's);
let's review what that does
compared to oop's ducktyping .

. oop's ducktyping turns
any call like f(x)
into x`type-mgt( operation:f, arg:x),
and x's type-mgt provides this service
for anyone who asks;
ie, if the current account can use x,
then any app running under that account
has permission to use x;
[12.29: whereas,
cap's are object-specific permissions:
an object accepts a call only if
the caller possesses a permission that
# specifies that object, and
# doesn't preclude the requested operation .
. a process starts out with no cap's
except those needed to remain functional:
it can accept arg's, return results,
and modify its own local mem allotment .
. other capabilities require
special permission provided by employers
(the user, admin, or os kernel).]

12.26: caller id:
. cap's give each app its own id,
so that cap'based calls also involve
the caller id; [12.29:
well,
it includes the concept of caller id;
like so:
cap's are awarded to particular id's,
and they are non-transferable;
so, then cap's are essentially a tuple:
(caller id, allowed object, allowed operations) .]
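. that tuple can be sketched directly (the caps set, call helper,
and app/file names are all invented for illustration):

```python
# a capability as the tuple above:
# (caller id, allowed object, allowed operations)
caps = {
    ('app1', 'fileA', 'read'),
    ('app1', 'fileA', 'write'),
}

def call(caller, obj, op):
    # an object accepts a call only if the caller holds a cap
    # that names the object and doesn't preclude the operation
    if (caller, obj, op) not in caps:
        raise PermissionError(caller + " lacks " + op + " on " + obj)
    return op + "(" + obj + ") ok"

print(call('app1', 'fileA', 'read'))   # read(fileA) ok
try:
    call('app2', 'fileA', 'read')      # app2 holds no cap: denied
except PermissionError as e:
    print("denied:", e)
```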

12.27: adda`plans:
. cap's can be controlled by
the run-time supervisor
instead of the current object's type-mgt .
. after the user has set limits on each app;
these become part of process records
(owned by supervision's task mgt),
and all attempts by a process
to communicate with others,
become a function of the capabilities:
eg, instead of asking to access the file system,
a process says things like,
"( let me modify the portion of filesystem
pointed at by my process record's writableFiles cap' .)
"( let me read all files within
my process record's readableFiles cap' ).

2010-12-27

Gilad Bracha's pluggable types

adda/gilad`pluggable types
summary:
12.26: varieties of encapsulation?
Typical language semantics depend on the type system
(e.g., casts, overloading, accessibility).
By eliminating this dependency,
we can make our languages more modular.

Class-based encapsulation
(relies critically on complex and fragile
typecheckers and verifiers)
-- very popular, but inherently less secure than
object-based encapsulation
(enforced only by context-free grammar).
Typecheckers are tricky and often buggy;
whereas, parsers are well understood.
--. optional typing simplifies language design,
to the benefit of the system as a whole.

12.27: understanding the concept:
. how can types be optional in smalltalk
when in fact smalltalk has an inheritance hierarchy?
aren't oop classes acting as types?
. oop typing is by definition "(optional);
because, it depends only on dynamic typechecking .
. mandatory typing (vs optional) is when
the code won't compile until after
the static typechecking system has verified
(by analyzing declared types and compatible assignments)
that any future object can handle
any of the messages they may be sent;
whereas,
smalltalk's oop never assumes future knowledge;
rather, every function call consists of
sending the object the name of the message,
and then that object can verify for itself
-- before accepting the current call --
whether it can handle a message of that name .

. the main idea of pluggable types
is that while smalltalk oop's
dynamic typechecking is a great idea,
static analysis has additional value
-- both can coexist together .
. a lang's type declarations are optional
when the lang's compiler can generate code
regardless of whether
the type declarations exist .
. type declarations are pluggable when
developer tools support multiple
static type declaration analyzers;
a test run consists of
first sending the code to the current analyzers;
if there are no warnings,
then the code is sent to the compiler .
. given some warnings, the developer can still
opt to compile the code anyway,
which will run at least until caught by
any run-ins with dynamic typechecking .

12.11 ... 12.12: news.adda/gilad`pluggable types:
. I found pluggable types mentioned in
a comment on a forum where Mark S. Miller
explained why capabilities rule and ACLs drool:

"(. Gilad, in his Pluggable Types manifesto,
talks about serialization as part of his
justification for why the vm's execution
shouldn't depend on types . )
-- z-bo
"(... the software systems equivalent of
Padlipsky's The Elements of Networking Style...
demonstrating why, in the software industry,
everything we do today is moronic and monolithic.)

ppt for bracha`pluggable-types.pdf:
. static typing is characteristically mandatory;
types should instead be optional;
a paradox of using type systems for
reliability and security
is that while they can
mechanically prove properties,
they are making things complex and brittle;
eg, type systems for vm and lang collide .

. persistence works best with structural typing;
whereas, nominal typing (declared by a type name )
forces the serialization
(a subsystem for persistent filing)
to separate objects from their behavior;
it can't tolerate a type's name being
associated with more than one impl' .
[12.27: well,
that's why there are class clusters,
where the type mgt declares a std format
for all member types to file their state as .]

12.12 ... 12.13: gilad's complete paper:
Gilad Is Right:
. the talk notes need backing by this paper:
pico.vub.ac.be/~wdmeuter/RDL04/papers/Bracha.pdf
(found here)
class-based encapsulation
vs object-based encapsulation ? ...
. class-based encapsulation:
class C
{ private secret.int
; public expose( c.C).int { return c`secret;}
} -- there is no way to know
when access to encapsulated members is allowed
without a (prohibitively costly) dynamic check.
Instead, use object-based encapsulation,
as in Smalltalk or Self .
[12.26:
. keep in mind he's talking about
merging dynamic with static typing;
object-based encapsulation? how about:
"Object-Oriented Encapsulation for Dynamically Typed Languages":
"( Encapsulation in object-oriented languages
has traditionally been based on static type systems.
As a consequence, dynamically-typed languages have
only limited support for encapsulation;
this paper is bringing encapsulation features
to dynamically typed languages.) ]

[12.27: the point of the example of
class-based encapsulation
was showing how it allowed for declaring
-- in the interface (vs class body) --
the body of an accessor
that accessed an instance var;
whereas proper interface-body separation
doesn't allow any assumptions about
what an interface entry does with privates .]

. using the assumptions of smalltalk oop,
a method can be accessed without knowing
the obj's type or class;
ie, if it doesn't support that method
then it simply returns nil,
or otherwise raises a not-supported.exception .

. overloading functions implicitly requires
nominal typing of function parameters;
not good ...
bewildered? a concrete example is strongtalk!
(originally at www.cs.ucsb.edu/projects/strongtalk)

12.26: complete text (found on his site):
Pluggable Type Systems Gilad Bracha October 17, 2004

12.26: Strongtalk`history:
Dave Griswold was frustrated with the fact that
there were still a lot of obstacles to using Smalltalk
in most kinds of production applications.
. it was still way too slow, [but simple]
had poor support for native user interfaces
(but portable)
and lacked a [static] type system,
which although it makes the language flexible,
also means large-scale software systems
are a lot harder to understand .
. by 1996 the system was transformed
by Urs Hölzle's speedy compilation technology,
and Gilad Bracha's type system for
programming-in-the-large;
but then the Java phenomenon happened;
years later, Sun finally open-sourced Strongtalk .

New repository location for Strongtalk 4 Oct 2010
new repository location on github
You will also find the first steps on a
Newspeak port on branch nsreboot.

strongtalk overview:
. the type system is not concerned with
improving execution performance,
since it is based on interface types.
Optimization requires concrete
implementation type information.

key characteristics of the Strongtalk type system are :
* allows natural Smalltalk idioms to be typechecked:
1. Separates the subtype and subclass lattices.
2. Includes parameterized types and classes.
3. Supports parametrically polymorphic messages,
with a flexible mechanism for
automatically inferring actual type parameters.
4. Supports both subtyping and type matching.
6. Provides facilities for dynamic typing.

5. Preserves the subtype relations between classes
defined by the Smalltalk metaclass hierarchy
and relates them to the types of their instances.
A protocol is a collection of message selectors
and their associated signatures;
Every class C automatically induces a protocol,
The type hierarchy is defined using protocols.

* The system provides the same kind of
reflective access to type information
as Smalltalk does to other language constructs.
Note that if a program does access
type information reflectively,
then by definition its behavior is dependent on
the type annotations in it.

Strongtalk supports two relations on types:
# Subtyping (substitutability):
an element of a subtype can be safely used
wherever an element of a supertype is expected.
# Matching (common pattern of self reference):
. Every protocol definition specifies its supertypes.
Likewise, a class definition can specify
the supertypes of its protocol.
By default, a protocol is declared to be
a subtype of its superprotocol.

2010-12-25

managing capabilities without encryption

12.7: adda/cstr/managing capabilities without encryption:
. in a read of capability-based security
I wondered if there was some way to
have capability enforcement
without having to encrypt all the shared pointers .
. related ideas include:
# Singularity's faster task switching
done by using ref's to shared mem'
instead of passing copies between modules;
# how to use pointers to shared resources
so that even though 2 concurrent sharers
were both active at different processors,
only one active sharer at a time
would have a useful link to the shared resource .
sketch of a possible design:
. a process can never reach out directly,
but is always accessing things via
pointers located in their header,
and only the supervisor can modify this header;
eg, the task scheduler .
. it's not enough to have possession of a pointer,
you've got to have a supervisor
copy it to your header;
so, it's like encryption,
in that it requires an authorization .
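. the header scheme can be sketched like so (Process,
supervisor_grant, and the resources table are invented names;
the point is that only the supervisor writes the header):

```python
resources = {'disk': 'disk-handle'}   # the supervisor's view of the world

class Process:
    def __init__(self):
        self.header = []              # writable only by the supervisor
    def use(self, slot):
        # a process never reaches out directly; it only indexes
        # pointers the supervisor already placed in its header
        return self.header[slot]

def supervisor_grant(proc, name):
    # possession of a pointer is not enough: the supervisor must
    # copy it into the process header (the authorization step)
    proc.header.append(resources[name])
    return len(proc.header) - 1

p = Process()
slot = supervisor_grant(p, 'disk')
print(p.use(slot))   # disk-handle
```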
layers for when bugs happen:
. encrypted cap'pointers provide
another level of security; [12.25:
. cap'based is supposed to include
the soa idea:
. bugs are going to get into the system,
but in a software system that
connected its modules
the same way https connects computers,
it wouldn't matter that bugs had invaded;
because each of the components is
being separately guarded by modularity
(not allowing direct access to state)
and is working only with ID'd clients
(not servicing anonymous agents).
. the idea of unencrypted header pointers
is assuming that
the system's runtime can be secured
which is not likely
on today's monolithic OS's .]

2010-12-23

icons of the alphabet

12.23: adds/icons of the alphabet:
. abc's look like the very basic mechanisms
whose names begin with such letters:
Angle(A is icon of angle arc between hinge plates)
Bisect(graphical counting or dividing),
Circumference, Diameter,
Extend (greek E means summation)
Front (arrow points at front surface of a drafting view
-- vs the top surface),
Gyration (revolving starts from right angle),
Hinge (H can be plates hinged),
Iatric(healed, parts assembled
-- I is a clocking of H)
Join(J has a hook for joining)
Kaleidoscope (K shows both V and V's mirror image)
Ligature (bonding of multiple indep'dimensions
-- symbolized by L, a right angle).
Multiply (M is a clocking of E, summation)
Not(N is the same shape as set`not: ~)
Oscillation (O is loop like cycles, oscillations),
Post-Oscillation or Product
(P is icon of dropping out of a loop)
Quality (what's under Oscillation? its basis or control)
Radical (Oscillation's side effects)
Specialization (S looks like yin-yang formations
-- specializing in complementary differences)
Top (arrow points at top surface of a drafting view)
Union (U looks like a collecting cup)
Vacillation (V is side view of wave, VVVV)
Wall (W is a counter-clocking of E, summation
-- stop additions)
Xiphoid (Greek xiphos: sword)
Yes (what's under Vacillation?
-- basis or control of V, energy)
Zero (Z is a clocking of N, not(this): not anything)

2010-12-17

culturomics

12.16: news.adds/culturomics:

Quantitative Analysis of Culture Using Millions of Digitized Books
"( We constructed a corpus of digitized texts
containing [5 million books]
about 4% of all books ever printed.

Analysis of this corpus enables us to investigate
cultural trends quantitatively.
We survey the vast terrain of "culturomics",
focusing on linguistic and cultural phenomena
that were reflected in the English language
between 1800 and 2000.
We show how this approach can provide insights about
fields as diverse as lexicography,
the evolution of grammar, collective memory,
the adoption of technology, the pursuit of
fame, censorship, and historical epidemiology.)
Google Tool Explores 'Genome' Of English Words

. To coincide with the publication of the Science paper,
Google's ngrams web app shows how often
a word or phrase has appeared over time
in its scanned literature .
Dr Jean-Baptiste Michel, a psychologist in Harvard's
Program for Evolutionary Dynamics,
and Dr Erez Lieberman Aiden
have developed the search tool .

. 8,500 new words enter the English language every year
and the lexicon grew by 70% between 1950 and 2000.
But 52% of these words do not appear in dictionaries
-- they are the majority of words used in English books .

2010-12-15

multi-interfaces

3.24: adda/multi-interfaces:
. just as oop classes have different interfaces for
clients vs subclassers,
adda needs a sublanguage for describing
multiple interfaces
and the various actors that can access them .
. dimensions for having separate faces include:
roles, security levels, priorities,
business arrangements (eg, demo.ware vs full-service),
and partner name (where the degree of allowed reuse
depends on which partner is doing the reusing ).

2010-12-14

pico puts some clothes on lisp!

12.13: lang"pico:

. what kind of academic site would drop their link
(pico.vub.ac.be/~wdmeuter/RDL04/papers/Bracha.pdf)
to an important paper like Gilad Bracha's
pluggable types! ? welcome anyway,
to Brussel's Vrije univ's pico project,
the marriage of Scheme and normal infix notation
for a lispy pascal !

Pico is a minimal but expressive language
for teaching computer concepts to
non-computer-science students (eg, Physics and Chemistry).
. it adapts Scheme's syntax (significantly)
and semantics (subtly):
* the semantics had to be simple
even if some Scheme features became inaccessible.
* the syntax had to be like that in math;
* ease-of-use, portability, interactivity...
must have priority over performance;
Pico features garbage-collected tables (i.e. arrays),
higher order functions, objects,
meta programming and reflection.
* Pico as a language had to coincide with
Pico as a tutoring system;
the boundaries between programming and learning
had to be totally removed.

Pico is no longer used as a tutoring language
but this is a consequence of mere politics;
[eg, mit moved from Scheme to Python
because Python too was a good teaching lang'
that was also useful (eg, for working with
the new robotics lib' mit adopted).]

Today, Pico is still used as a means to teach
principles of language design,
interpreters and virtual machines
in a sophomore course:
Theo D'Hondt's hll concepts:
. all software used in this course is built in Pico.
Even the virtual machine itself is built in Pico
(it is a so-called meta-circular implementation
using a previously created ANSI C Pico machine).

Theo D'Hondt's computational geometry course:
. it uses C and a version of Pico
( a graphical extension
with syntax modified to enhance readability).

Theo D'Hondt's Growing a Lang from the inside out:
Programming Language Engineering
is the assembly and mastery of
constructing and applying
programming language processors.
. we need to review research like continuations,
and critique the current attempts at concurrency .
. this series of lectures discusses the need for
language processor design to be extensible,
similar to Guy Steele's 1998 OOPSLA phrase
“Growing a Language”
referring to the need for an expressive core
that is easily extended .
We need to bridge the gap between
the abstract concerns addressed by the language
and the features offered by the hardware platform
-- keeping in mind software reuse .

12.14: beyond pico:
AmbientTalk is influenced by E.lang and pico;
it's intro'd here along with google's Go.lang:
Another evolving area of computing
concerns programs running on mobile devices
linked in "ad hoc" wireless networks.
AmbientTalk, an experimental language presented by
Tom Van Cutsem from Vrije Universiteit Brussel in Belgium,
explores a new paradigm called
"ambient-oriented programming,"
which departs from traditional distributed computing
in two main ways.
First, it does not rely on central infrastructure.
Second, it's smart enough to buffer messages
so that when the connection drops,
they're not lost, and when the connection is restored,
it sends the messages through as if nothing happened."

12.13: adda/double colon operator in pico:
pico'declarations:
Pi::3.1415
immutableFun(x)::x
. pico'declarations (name::value)
introduce immutable symbols (i.e. constants)
whereas pico'definitions
introduce assignable names [variables].
# Definition
eg, x:4, add(x,y):x+y, t[5]:10
a variable is defined, a function is created,
or a table is allocated and initialized.
# declaration
[defines the same things, but immutably];
eg, Pi::3.14, t[3]::void and f(x)::x.
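a toy environment can make the `:` vs `::` distinction concrete; this is a sketch with invented names (`Env`, `define`, `declare`), not pico's actual evaluator:

```python
# ':'  (definition)  introduces an assignable name;
# '::' (declaration) introduces an immutable symbol .
class Env:
    def __init__(self):
        self.vars, self.consts = {}, {}

    def define(self, name, value):        # pico's  name : value
        if name in self.consts:
            raise TypeError(f"{name} is a constant")
        self.vars[name] = value

    def declare(self, name, value):       # pico's  name :: value
        if name in self.consts:
            raise TypeError(f"{name} already declared")
        self.consts[name] = value

    def lookup(self, name):
        return self.consts.get(name, self.vars.get(name))

env = Env()
env.declare("Pi", 3.1415)    # Pi::3.1415
env.define("x", 4)           # x:4
env.define("x", 5)           # re-assignment of a variable is fine
try:
    env.define("Pi", 3)      # assigning to a constant fails
except TypeError as e:
    print(e)
```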

2010-11-14

oop with frameworks

10.23: adda/oop/the 3 layers:
. Padlipsky, author of
The Elements of Networking Style:
"(If you know what you're doing,
three layers is enough...)

. oop'ing needs only 3 layers,
not endless inheritance ?
that reminds how I saw oop as simply
a layer under structured programming: [11.14:
the 1980's structured paradigm meant that
programs could create vocabularies with functions;
but with oop they could also create class types
like the native types that were provided
-- types with binary ideograms
and implicit coercions among related types .
. before that, the types were simply given,
and you built your vocabulary of functions
on top of those types .]
. at that time,
I had not yet thought about frameworks ...

. one place where many levels of
inheritance can be appreciated,
is within the system it was modeled after:
biological classifications of animals
involving inherited features or behaviors .

. I would say the 3rd layer of Padlipsky's oop
is the type cluster, just as type Number is
for the numeric subtypes {R,Q,N,Z,C};
notice the Number type.class is not abstract;
it has the code that handles binary operations, [11.14:
and diff's in operand subtype;
eg, you don't coerce an int to rational
if the rational's value happens to be integral .]
. objects can interact with each other based on
having context menu's which include
recognized submenu's .
. this is programming to interfaces,
and doesn't involve the impl'inheritance
that is usually associated with oop .

. in summary, the 3 layers of oop are:
#1: a type having an interface to share;
#2: a type claiming to adopt a shared interface;
#3: an app that instantiates interface adopters .

. frameworks are a sort of
structured programming with generics;
you can think of generics as a
layer under structured code,
parallel with type.classes .
[11.14:
. a structure library
gives you a language of functions
which you may compose in various ways;
ie, the library is an employee,
and you're the exec .
. a framework library takes the exec' seat
letting you customize its decisions
with your own structure library .
. a datatype is like a structure library
but its purpose is to control modifications
to the obj's it will create .]

the various uses of double-colon

 10.2: news.adda/dstr/:: as conversion in Axiom:
 The :: is used here to change from one kind of object
 (here, a rational number) to another (a floating-point number).
r::Float .
. the aldor.lang` tutorial shows
the same is true of aldor.lang  too .

10.13: news.adda/dstr/use of :: or misparsing javascript:

var anchors = $x('//h3/font/b[text()="[PDF]"]/parent::*/parent::*/a')
... I may be misparsing;
this could be separating places by a colon,
but the things being separated are colon-terminated tags .

10.14: news.adda/[perl`::]:
Perl's Win32::GuiTest module
10.18: mathforum.org:
. it is not a bad thing to have your module name
include two colons, as in the name
Text::Wrap.
When this module is installed,
it will be placed in a directory named Text
under the root library directory.
The module code itself will be in a file called Wrap.pm .
This helps keep the library directory more organized.
In addition, the :: naming convention
can also indicate class hierarchies,
although it does not have to.
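the name-to-directory mapping is mechanical; a small sketch (the library root path here is just an example):

```python
from pathlib import PurePosixPath

# maps a Perl module name like Text::Wrap to its file
# under the library root: Text/Wrap.pm
def module_path(root: str, module: str) -> str:
    parts = module.split("::")     # ['Text', 'Wrap']
    parts[-1] += ".pm"             # the final part is the .pm file
    return str(PurePosixPath(root, *parts))

print(module_path("/usr/lib/perl5", "Text::Wrap"))
print(module_path("/usr/lib/perl5", "Win32::GuiTest"))
```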
11.14: keyword double-colon:

. it's the scope resolution operator
in php and c++, eg, class::function
. c++'s default class is the global space:
::global -- found outside any function block .
. that way the function's block can shadow a global's name,
and then still access that global .

. the php`err.msg's call "(::) a Paamayim Nekudotayim,
(Hebrew for double colon).

gnu`make`::
Double-colon rules are explicit rules
handled differently from ordinary rules
when the same target appears in
more than one rule.
Pattern rules with double-colons
have an entirely different meaning:
if the rules are of type double-colon,
each of them is independent of the others.
. the cases where double-colon rules really make sense
are those where the order of executing the recipes
would not matter.
. they provide a mechanism for the rare cases
in which the method used to update a target
differs depending on which prerequisite files
caused the update .
HP OpenVMS`DECset`DIGITAL Module Management System`
. "(::) means additionally_depends_on .

ipv6 email syntax:
. it can have 8 words (16bit values)
separated by colons;
a double colon means missing values,
assumed to be zero .
the bible of SMTP, RFC 5321:
“The "::" represents at least 2 zero words.”
the bible of IPv6, RFC 4291:
“The use of "::" indicates one or more zero words.”
RFC 5952:
“The symbol "::" MUST NOT be used for just one zero word.”
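python's stdlib `ipaddress` module can show exactly what a "::" elides, and that a second "::" is rejected as ambiguous:

```python
import ipaddress

# expand '::' back to all 8 words, zeros restored:
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)

# '::' may appear at most once; otherwise the number of
# elided zero words can't be determined:
try:
    ipaddress.IPv6Address("1::2::3")
except ValueError as e:
    print("rejected:", e)
```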

"(::) is a contraction over four indices,
-- like when a colon means
a tensor contraction involving 2 indices .

a font for an audiovisual within dialog:
. everything was fine ::click:: . oh .

universal mvc`controller for mac

10.7: adda/automator built-in:
. noticing that mac.automator's library
had no 3rd-party app's,
I realized that app scriptability
was like the gui automation:
it should be baked right in .
. most app's can be engineered to be
bot-friendly first, and then wrapped in a gui
that is human-friendly;
the app maker doesn't have to deal with
graphics at all, just basic data structures
that the system is known to have graphics for .
. this is the way to pervasive app scriptability;
I'm wondering what it is about mac app's
that prevents the system
from seeing and using operations bots can reuse .
. it might have to do with
app developers' privacy and control issues .

roll-your-own heap systems

10.5: news.adda/app'based heaps more vulnerable:

Adobe Reader's Custom Memory Management: a Heap of Trouble
Research and Analysis: Haifei Li
Contributor and Editor: Guillaume Lovet
 a PDF-specific exploitation research
 focusing on the custom heap management on Adobe Reader.
summary:

. I filed this not under adda
because that is my C code generator,
the basis of addx,
whose design includes rolling its own
heap mgt system .
. this paper was pointing out that
the modern OS has a secure heap due to
randomization -- a feature I hadn't planned
to incorporate into the addx mem system .
"(performance sometimes
being the enemy of security,
this custom heap management system
makes it significantly easier
to exploit heap corruption flaws
in a solid and reliable way.
Coupled with the recent developments in
DEP(data execution prevention) protection bypass,
this makes heap corruption exploitation
possible across very many setups )
. if malware can overwrite your heap,
then it helps to have a hardened heap
(like what would typically be provided
by your platform's os).
. questions I had were
how are these happening ?
here's their skeletal answer:
"( Heap corruption flaws are initiated by
(Heap overflow, use after free,
integer overflow, etc...);
the two main ways to exploit these flaws:
# overwrite some app-provided data in the heap;
# corrupting Heap mgt system's internals
(eg: block headers, etc...)
so as to make the system itself
overwrite "interesting" data;
for instance, during blocks unlinking operations,
where several pointers are updated.)
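the "blocks unlinking" case can be sketched abstractly: unlinking block B does `B.prev.next = B.next; B.next.prev = B.prev`, so if malware has already overwritten B's header fields, those two stores become attacker-chosen writes. a toy model (the flat `memory` dict and the addresses are invented):

```python
memory = {}    # address -> value, standing in for flat memory

class Block:
    def __init__(self, prev_addr, next_addr):
        self.prev_addr, self.next_addr = prev_addr, next_addr

def unsafe_unlink(b: Block):
    # writes through the header fields without validating them;
    # "safe unlinking" would first check that
    # b.prev.next == b and b.next.prev == b .
    memory[b.prev_addr] = b.next_addr    # *(b.prev) = b.next
    memory[b.next_addr] = b.prev_addr    # *(b.next) = b.prev

# a corrupted header: the attacker chose both fields
corrupted = Block(prev_addr=0xdead, next_addr=0xbeef)
unsafe_unlink(corrupted)
print(memory)    # both the addresses and the values were attacker-picked
```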
. how is malware there in the first place
waiting for a priv'escalation?
. if you prevented all these:
(Heap overflow, use after free,
integer overflow);
what else is there?
is it that functions don't check inputs?
"(many PDF vulnerabilities out there
are structure-based ones, (i.e. file format))
or can malware jump in the middle of your code?
aren't you cooked already?
[11.14:
. their attack data becomes code
because it's overwriting what leads to your code:
the heap pointers you use
for reaching your own code .]

programming in-the-large with c

8.5: news.adda/translate/info'hiding n type-safe linking:

2007 CMod Modular Information Hiding
and Type Safe Linking for C

CMod, provides a sound module system for C;
it works by enforcing a set of rules that are
based on principles of modular reasoning
and on current programming practice.

CMod's rules safely enforce the convention
that .h header files are module interfaces
and .c source files are module implementations.
Although this convention is well known,
developing CMod's rules revealed many subtleties
to applying the basic pattern correctly.
CMod's rules have been proven formally
to enforce both information hiding
and type-safe linking.

2010-11-12

a doctor with good bed-side manners

11.6: adde/a doctor with good bed-side manners:

. a recent discussion about hypervisors
covered sophisticated ways to handle
extreme but transient mem'space shortages
when multitasking several app's;
. the main approach should be
good bed-side manners:

. a study of why doctors get sued discovered
that, of the doctors who were sued the least,
what they had in common was
having good bed-side manners
(I would imagine that this would include
being honest with self and patient
with what they can expect,
and what their other options are).
. for a computer OS, that means
when a memory shortage comes up,
the OS is:
# making time to dialog with users:
(explaining why this app is bogged down
by the extent of its workload,
not getting the mem or cpu it needs,
what library components it's using
-- details that can be provided
only when the system is managed:
ie, the algorithm is compiled in a way
that keeps the os in charge,
and in the know).

# always keeping up with all user input:
(ie, the gui is on the highest priority thread;
the ability to take and display user input
is never frozen, and the input is always backed
by the os itself, a buggy app can't lose it .)
--
. this kind of control can get expensive,
and combining it with high-performance app's
might require networking 2 boxes or cores:
one for the user`interface,
and the other for app's to stay on task;
[11.9: but,
if 2 boxes are not available,
adda's translation should be providing
true multitasking by embedding into app's
frequent calls to the gui coroutine,
which then gives the os a chance to
stay in touch with the user .]

loop`count attribute

11.9: adda/cstr/loop`count attribute:
. this is a language add-on
that can simplify control var's:
instead of having to
declare and init' a loop var in the head,
and then increment it in the body,
now,
every loop understands "(loop`count),
or "(LOOPLABEL`count) .
. labels are good for nested loops:
when you're in the inner one,
and you want the count of the outer one,
you'll then need a label for the outer one .
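a sketch of the idea in Python, where a labeled loop object carries its own `count` (the `CountedLoop` name is invented):

```python
# every loop understands its own `count; labeling the loop obj
# lets an inner loop read the outer loop's count .
class CountedLoop:
    def __init__(self, iterable):
        self._it = iterable
        self.count = 0           # reads as  LOOPLABEL`count

    def __iter__(self):
        for item in self._it:
            self.count += 1
            yield item

outer = CountedLoop("ab")
for x in outer:
    inner = CountedLoop(range(3))
    for y in inner:
        pass                     # inner.count is live here
    print(outer.count, inner.count)
```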

timer controls

11.9: adda/co/timer controls:
. the addx app has timers that can
count elapsed time, or addx cycles:
ie, how many times has addx been called
to handle user interaction and concurrencies
-- that should be happening every 10..1000 cpu cycles?
. the ideal counter could tell an app
how much cpu time it's been given;
but,
. addx will usually be running as an app
on a multitasking system,
so it may not be easy to get addx`time
(ie, the amount of cpu time addx has received).
[11.11:
. if the platform can't provide a process`time
and addx is getting task-switched a lot,
then getting addx`time could be expensive:
. it can approximate the {suspend, resume}-times
by tracking the clock at a rate that is
significantly faster than it's being multitasked .]
uses:
. timers can be used in a loop`head
for when the algorithm is not sure to converge,
the timer can help it exit anyway .
. if a loop exits,
and its countdown timer is zero
then control knows its loop can't find
a converging solution .
. the loop can also use (~timer? raise failed),
or some other form of reporting .
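a sketch of a timer in the loop head (names and the time budget are invented): if the iteration converges, the loop exits normally; if the timer expires first, control knows no converging solution was found:

```python
import math
import time

# countdown timer guarding an iteration that may not converge
def iterate_until(f, x, tol=1e-9, budget=0.5):
    deadline = time.monotonic() + budget
    while time.monotonic() < deadline:
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt, True     # converged before the timer ran out
        x = nxt
    return x, False              # timer expired: report failure instead

# cos has a fixed point near 0.739; this converges well within budget
val, ok = iterate_until(math.cos, 1.0)
print(ok)
```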

rapid compile times

11.10: adda/translation/rapid compile times:
. during r&d people want quick compiles
(that's what inspired google to create Go.lang);
whereas, one problem with adda
is that in making the lang
easy for the user, and feature-rich,
its compiler might take a long time
trying to figure out what the user wants .
[11.12:
. one remedy for this is integration with adde,
which calls the compiler to attempt tentative translations,
and compiling in sections,
so that if a routine compiles,
and just one line is modified,
only that one line needs to be recompiled .]
. adda can also speed compiles by skipping c,
and translating to addm (the virtual machine).
. addm can do all the safety checks as needed,
rather than at compile-time .
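the section-wise recompile idea can be sketched as a hash-keyed cache: a routine is retranslated only when its source changed (all names here are invented, and a real implementation would track dependencies too):

```python
import hashlib

compiles = 0     # counts actual translations, to show cache hits
cache = {}       # routine name -> (source hash, compiled result)

def compile_routine(name, source):
    global compiles
    key = hashlib.sha256(source.encode()).hexdigest()
    if name in cache and cache[name][0] == key:
        return cache[name][1]               # unchanged: reuse
    compiles += 1
    compiled = f"<obj code for {name}>"     # stand-in for translation
    cache[name] = (key, compiled)
    return compiled

compile_routine("f", "x + 1")
compile_routine("g", "x * 2")
compile_routine("f", "x + 1")    # cache hit: f is unchanged
compile_routine("g", "x * 3")    # g's one line changed: recompile
print(compiles)
```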

handling multi-inheritance interface clashes

11.10: adda/oop/multi-inherit:
. suppose 2 interfaces have the same subprogram name?
you can qualify it with type ownership,
eg, aTYPE`f;
or it can be implied by the call's other items:
it's the intersection of the types associated with
the subprogram, and all its operands;
eg, subprog"add is in types {number, string, multi-set};
int is a number;
beast is in {number, behaviors};
so the type-intersection of (2 + beast)
is number, hence it eval's to (2+616) .
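a sketch of that resolution by type-set intersection (the tables below just encode the beast example from above):

```python
# each subprogram name and each operand carries a set of types;
# the call's meaning is their intersection .
SUBPROG_TYPES = {"add": {"number", "string", "multi-set"}}
OPERAND_TYPES = {2: {"number"},
                 "beast": {"number", "behaviors"}}   # beast = 616 as a number

def resolve(subprog, *operands):
    candidates = set(SUBPROG_TYPES[subprog])
    for op in operands:
        candidates &= OPERAND_TYPES[op]
    if len(candidates) != 1:
        raise TypeError(f"ambiguous or empty: {candidates}")
    return candidates.pop()

print(resolve("add", 2, "beast"))   # the intersection is number
```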

library mgt

11.9: adda/lib mgt/multiple lib's:

. a lib is an obj file that does lib mgt;
you declare it in a folder,
and then anything compiled from within that folder
is registered in and by that lib;
if a project folder doesn't have a lib,
the compiler searches up the folder hierarchy,
and beyond the acct's db:
the node's lib, intranet's lib,
and finally the std, built into the compiler .
. at each level, a diff shows
how your lib is a modification of
the lib it inherits from .
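the upward search can be sketched with `pathlib` (the `lib.db` marker name is invented; the real search would continue to the node's lib, the intranet's lib, then the built-in std):

```python
import tempfile
from pathlib import Path

# walk from the project folder up the hierarchy,
# returning the first lib found
def find_lib(start: Path, marker="lib.db"):
    for folder in [start, *start.parents]:
        if (folder / marker).exists():
            return folder / marker
    return None   # fall through to node, intranet, then built-in std

root = tempfile.mkdtemp()
proj = Path(root, "acct", "project", "sub")
proj.mkdir(parents=True)
(Path(root, "acct") / "lib.db").touch()   # lib declared at the acct level
found = find_lib(proj)
print(found)
```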

. lib's can easily share items with other lib's:
their items are just links to the acct's db,
modified entries simply switch the link
from targeting the share
to that of a private lib .

. adda assumes you want a lib
placed in the top of your acct`fs (your home dir)
-- with no localized lib's,
and all compilation units shared .

. if you want to make sure you don't
link to anything except locals;
create a new empty lib in your project folder,
and inherit null .

. you can inherit from another lib
either by copy or link:
(linking to the lib means your lib
grows with that lib,
copying means you copy all the current items)
. various ways of partial inheritance:
# undo some of an inheritance
# inherit specific components of a lib .

foreign lib heads-up wrapper:
. program's compiled by adda are
user-friendly because all code is
sprinkled with embedded calls to gui mgt
for ensured responsiveness .
. since adda language is translated to ansi c,
it can link to foreign lib's
(ones that adda didn't compile).
. adda puts a wrapper around
calls to foreign code,
reminding the user that
some potentially intrusive code
is about to run,
the reminder would have some option checkboxes:
# ok to always proceed without warning?
# for any foreign lib?, whitelist... warnlist... .
# want a dashboard light indicating
when the system is running foreign code? .

. if there's some way to know
when the user is annoyed,
and it may be due to the system being unresponsive,
that will never happen when addx is running adda code,
so then remind them it's because
a given foreign function is not releasing control .

. one way to insure more integration
is by being able to insert gui-relief calls
at the c code level,
rather than in adda code .
11.12:
. but that still leaves code that can't be trusted;
it is impossible to convert arbitrary c
to safe c, in every case,
because the arbitrary c wants to do things
that safe c would never do!
but, adda should at least attempt it,
at least in some advanced version;
otherwise,
. adda should tell the user
when a foreign lib can't be converted by adda,
and explain why adda generates c code,
rather than reusing c code:
it's more trustworthy even if not more efficient;
because, it's sure not to crash the app,
and not have vulnerabilities that could be
used by malware seeking to smash the stack or heap .
11.11:
. when users add a foreign lib to their acct,
they should be reminded of the risks,
and should identify the source
with a {project name, url, author name}
so that if there are problems with it,
addx can meaningfully show users
which lib is involved .

exception mgt

11.9: adda/exception mgt/intro:
. we expect some errors;
because, we'd like to
use some software before having to
formally verify its correctness,
so, we compensate
by finding ways to limit damage:

# isolation:
. modules should never crash other modules:
we can verify a microvisor or microkernel,
and it enforces the firewalls between processes .
. addx should be such a microkernel design;
[11.12: just as the app doesn't have
direct access to the user (it's the mvc`model
while adde is the view&controller )
it also has no direct access to the machine:
all c-level activity is handled by addx|addm .]

# regularly yielding to the gui coroutine:
. a critical priority of addx
is never letting the gui be unresponsive;
this is done by having adda embedding calls to addx
at every place in a subprogram that could
mean more than a millisecond of cpu time:
# inside every function call,
# inside every loop .
. this is cooperative multi-tasking with
system-enforced coroutine yielding
for paravirtualizing true multi-tasking .
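the embedded-yield idea can be sketched with Python coroutines (names invented): the translator would plant a send to the gui coroutine inside every loop, so the gui is serviced no matter what the app is doing:

```python
# cooperative multitasking: the gui coroutine is fed
# at every loop iteration of the app's work .
def gui():
    while True:
        event = yield
        print("gui saw:", event)

def app(gui_co, n):
    total = 0
    for i in range(n):
        total += i
        gui_co.send(("progress", i))   # call embedded by the translator
    return total

g = gui()
next(g)               # prime the coroutine
result = app(g, 3)
print(result)
```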

# regular reporting to supervisor:
. getting a regular progress report
will limit waiting on the down employee .
. employees can be waiting on
down service providers;
loops can be waiting on a terminal condition
that never arrives;
as when there's unexpected input;
mutual recursions (eg, f->g->h->f)
are another form of looping;
deadlocks can cause indefinite waiting,
but those can be avoided by
proper use of concurrency,
and by not locking resources
(using queues instead).
. a subprogram can be waiting because of
a failure to establish dialog with the user
(eg, if the system allows a window to be
always on top,
then dialog windows may be hidden from view).
. all threads are periodically tested for
whether they're waiting for a job offer,
for a {service, resource},
or are performing a service;
in which case,
addx measures how long ago the thread
produced its latest progress report .
. the progress reports read like a narrative
that a human would appreciate;
to make these reports quick,
msg's are encoded, so that each msg
is represented by some value in a byte (0...255)
(or a 16-bit integer if this subprogram had
more than 256 msg's).
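a sketch of the byte-encoded reports (the message strings are invented examples): appending a report costs one byte, and the supervisor decodes the narrative only on demand:

```python
# each msg a subprogram can report gets a byte value 0..255
MSGS = ["started", "parsing input", "searching", "converging", "done"]
CODE = {m: i for i, m in enumerate(MSGS)}   # msg -> byte value

report = bytearray()
for msg in ("started", "searching", "done"):
    report.append(CODE[msg])                # one byte per progress report

# decoding reads like a narrative a human would appreciate:
narrative = " -> ".join(MSGS[b] for b in report)
print(narrative)
```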

# blackbox recorder:
. it tells us what was going on
just before a crash .
. it saves the progress reports,
and the recent call chain .

# a debug mode:
. when users' tools are not working well,
some users may like to see what's wrong
by using developers' visualization tools;
eg,
a freezing problem can be illustrated
as a looping call chain:
. the tool collects all the recent calls
(saved by the blackbox recorder)
then tries to find a recursion,
and shows the list of calls that are repeated .

11.8: sci.adda/exception mgt/reporting#loop count approximations:
[11.11: obs:
. it would be better to use progress reporting .]
. if adda can't statically figure out
when a loop won't terminate,
it should remind coders to give
approximate loop counts;
then the run-time can use that how?
sci:
. before the loop entry,
adda sets the counter, and a limit,
then has the scheduler check the limit .
. the scheduler doesn't have to check all the time;
if 2 sec's go by,
nearly all loops are done by then;
if no limit was given,
the scheduler gives the loop more cpu time if it can,
so the real time is compressed;
if another sec goes by
and there are other jobs to do,
should the loop start getting starved?
it could be doing a huge job!

eiffel vs ada

11.10: adda/lang/eiffel vs ada
Tucker Taft:
. pre- and post-conditions can be
fairly easily and efficiently included in Ada code.
Invariants seem difficult to emulate directly in Ada.
If you're really interested in the
formal use of assertions with Ada,
maybe Anna is a solution for you.

. although I like the assertion stuff in Eiffel,
I think the language has a number of "inelegant" aspects.
For example:
# exception handlers only at
the top level of a routine,
with the only way to "handle" an exception
being by retrying the whole routine.
# No way to return from a routine in the middle.
This makes it a pain in the neck
to search through a list for something in a loop,
and then return immediately
when you find what you want.
(I have never found the addition of
extra boolean control variable
a help to the understanding of an algorithm.)
# Namespace control
handled by a separate sublanguage,
and no real higher level concept of
"module" or "subsystem."
# An obscure notation like "!!" (construction)
being used for an important and frequent operation;
# No way to conveniently "use" another abstraction
without inheriting from it.
# No strong distinctions between
integer types used for array indexing.
# Using the same operator ":="
for both (aliasing) pointer assignment,
and for value assignment,
depending on whether the type is "expanded."
(Simula's solution was far preferable, IMHO).
-- And most critically:
# No separate interface for an abstraction.
You can view an interface by running a tool,
but this misses completely the importance of
having a physical module that represents the interface,
and acts as a contract between
the specifier or user of an abstraction
and its implementor.
In Eiffel, one might not even be truly aware
when one is changing the interface to an abstraction,
because there is no particular physical separation
between interface and implementation.
I consider many of the above problems quite serious,
with some of them being real throwbacks
to the old style of programming languages
where there were no well defined
interfaces or modules.
Hence,
I cringe a bit when people say
that Eiffel is the "most elegant" OOP
and that they would use it
if only it were practical to do so.
In many ways,
I think Ada is much better human-engineered than Eiffel,
with important things like range constraints
built into the language
in a way that makes them convenient to use.
Although general assertions are nice,
they don't give you the kind of
line-by-line consistency checks
that Ada can give you.
To summarize --
Although Eiffel certainly has a number of nice features,
I don't consider it ready for prime time
as far as building and maintaining large systems
with large numbers of programmers.
And from a human engineering point of view,
I think Ada is significantly better.
jhc0033:
Eiffel targets a largely similar audience of
"correctness-oriented" programmers that Ada does.
However, it took some digging around
(no introductions to the language mention it)
to discover that Eiffel has a gap in its type system.
Guess what, type theory is a branch of math,
and OOP is a spiritual following.
I know what takes precedence in my book.
The Eiffel community's attitude is basically:
"we'll just pretend 2+2=5 because we can
use it to justify some teachings".
Ludovic Brenta:
I evaluated Eiffel too when I read Bertrand Meyer's
Object-Oriented Software Construction book.
The two things I dislike the most about Eiffel
are the lack of range constraints on numeric types
and the fact that almost all contract checks
are deferred to run-time.
Helmut
- Ada has a built in concurrency model (like java)
and Eiffel does not.
In Eiffel there is SCOOP
(simple concurrent object oriented programming)
which tries to integrate concurrency into the language.
But there is not yet any Eiffel compiler available
which implements SCOOP.
- Eiffel has "Design by Contract"
which is a very powerful mechanism to get your SW right.
Using assertions in your code appropriately,
you are able to catch a bug much closer to its origin
than without DbC.
DbC opens up the road for a verifying compiler
(i.e. a compiler which can at compile time
verify if contracts are not broken).
I don't understand why the promotors of Eiffel
haven't made the language more complete
in terms of standardization,
standard libraries and concurrency.

Pascal Obry:
Ada even supports a Ravenscar profile
(where no deadlock can occur)
usable for high-critical systems.
1995 oop: programming by extension and dynamic dispatching
2005 oop: protected/tasking interface that can be inherited.
helmut:

The ECMA Eiffel language specification is a well written document
(it is a language specification document
and not a document for the Eiffel user).
It lacks only in the area of
void safety, initialization
and covariant redefinitions (catcalls).
The problem:
It has not been updated since June 2006.
I.e. it reflects a status which has never been implemented
(and will probably never be implemented,
not even by EiffelStudio).
Eiffel`author Bertrand Meyer provoked by Ada.

(Yannick Duchêne) a écrit :
I talked to him once (back in 1990 or 1991):
the Eiffel inventor hates Ada .
Adam:
. It appears that Meyer has some criticisms of Ada,
but I couldn't find anything that would indicate
that he hates it.
His Eiffel site has an article about the Ariane 5 crash,
but he said there that
the language couldn't really be blamed
because the problem could have been caught
using Ada's exception mechanism
if the programmers had used it properly.

2010-09-14

anti-communitarianisms by Niki Raapana

I found Raapana's youtube vid's
through her acl site
and in case anyone would like to avoid being
dragged slow-motion through the hate & porn
I transcribed them here
so you can study some
anti-communitarian propaganda
from a safe distance .

2010-08-31

the MathLink ABI

adda/ABI"mathlink

8.12: news.adda/mathematica`mathlink:
. MathLink is the same as the
binary version of adda:
it's the communication protocol between
the kernel that provides the app,
and the notebook that is the user's agent;
. any program that adopts this protocol,
can communicate with any other app,
and of course, the user's agent .
. adopters can communicate as
client, server, or peer-to-peer .
. it makes accessible any hardware resources
having a c interface .
. because it's the connection between
the user's shell and the engine,
you're assured that its API is complete:
anything you can see as a user,
can also be seen by an
app that has embedded Mathematica .
. conversely, a custom user's shell
can have integrated access to both
Mathematica and addx .
. Wolfram already has a customized
user's shell, .NET/Link,
that integrates microsoft's .NET
. this is a complete integration,
extending the Mathematica language
with all existing and future .NET types
(which include all library calls)
allowing the same immediate run mode,
for RAD programming of .NET
and Mathematica extensions .
-- and it's openware!

. its protocol includes internet,
and includes data integrity .
. if the binary protocol is not usable,
it can also revert to html or xml .
. as the binary version of a
full-featured language,
it can express out-of-band data,
such as exceptions .
. having this network access
means it can support a
Parallel Computing Toolkit
and act like the X protocol,
running on a server, while
viewed from a laptop or tablet;

. in addition to being an ipc protocol
for extending and embedding Mathematica,
Mathlink is a reference to some
pre-built integrations with some other
popular app's like Excel,
where it can either extend or replace
that spreadsheet's macro language .

. the C/C++ MathLink Software Developer Kit (SDK)
ships with every version of Mathematica .

massively parallel computing for the masses!
. the Mathematica cloud computing service
is a collaborative effort of
Wolfram Research's parallel programming API,
Nimbis Services's job routing,
and R Systems NA's supercomputer time .
. it assists parallel programming
by providing an integrated
technical computing platform,
enabling computation, visualization,
and data access.
(2008.12: Mathematica 7 features concurrency primitives:
Wolfram's new Parallelize, ParallelTry
and ParallelEvaluate functions
provide automatic and concurrent
expression evaluation.
Parallel performance can be tweaked and queued using
the ParallelMap, ParallelCombine, ParallelSubmit,
WaitAll and WaitNext functions.
These and many other parallel computing functions
ensure that developers have tremendously granular control
over what will be sent through the parallel pipeline
and exactly how that data will be processed.
. this concurrency can also be fully utilized
with Wolfram's gridMathematica
and upcoming CloudMathematica add-ons .)
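. the ParallelMap and ParallelSubmit/WaitNext patterns
have rough analogs in Python's concurrent.futures;
a sketch of the idea (not Wolfram's actual API;
the function names here are illustrative):

```python
# rough analogs of ParallelMap and ParallelSubmit/WaitNext
# built on Python's concurrent.futures (illustrative names) .
from concurrent.futures import ThreadPoolExecutor, as_completed

def parallel_map(f, xs, workers=4):
    """Apply f to each element of xs concurrently; keep input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs))

def submit_and_wait_next(f, xs, workers=4):
    """Queue jobs, then yield each result as soon as it finishes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(f, x) for x in xs]
        for fut in as_completed(futures):
            yield fut.result()
```

. the first call preserves ordering (like ParallelMap);
the second hands back results in completion order
(like WaitNext), which is what gives the granular control
over the parallel pipeline .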
. Nimbis Services, Inc., is a clearing-house,
providing business users an easy to use
menu of hpc services,
including TOP500 supercomputers
and the Amazon Elastic Compute Cloud,
all in one "instant" storefront.
. Nimbis Services will enable access to
R Systems NA, Inc.,
whose R Smarr cluster was the
44th fastest supercomputing system
on the TOP500 list in 2008 .
. R Systems has exceptionally large memory
in multi-core HPC resources
with a double-data and quad-data rate
InfiniBand network .
R Systems not yet accessible:
. only Amazon EC2 configurations
are currently operational.

8.14: adda/ABI/versioning:
. an abi (app'binary interface)
may need to be revised;
as an extension to the addm abi
(type, subtype, function, args),
there needs to be a version identity number
that is established by handshake;
a handshake is detected by the msg being
sent from the zero identity .
. identities are retained for the duration of
a connection session;
. the handshake is coded in the original ABI version,
and the parties can then haggle about
what the remaining session will be coded in .
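. a sketch of that handshake in Python
(the message shape and all names here are illustrative,
not any real addm spec):

```python
# versioning handshake sketch: messages are (identity, payload) pairs;
# identity 0 is reserved, so a msg from identity 0 is a handshake
# coded in the original ABI version . (illustrative, not a real spec)

HANDSHAKE = 0
MY_MAX_VERSION = 2   # highest ABI version this party understands

class Session:
    def __init__(self):
        self.version = 1          # every session starts in the original ABI
        self.next_identity = 1

    def receive(self, msg):
        identity, payload = msg
        if identity == HANDSHAKE:
            # haggle: settle on the highest version both sides support,
            # and assign the peer an identity for the rest of the session
            offered = payload["versions"]
            self.version = max(v for v in offered if v <= MY_MAX_VERSION)
            assigned = self.next_identity
            self.next_identity += 1
            return (assigned, {"version": self.version})
        # normal traffic, now interpreted under the agreed version
        return (identity, {"echo": payload})
```

. identities persist for the connection session,
and everything after the handshake is coded in
the agreed version .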

2010-08-30

lang's for supercomputer concurrency

adda/concurrency/lang's for supercomputer concurrency
7.27:
. the DARPA`HPCS program
(High Productivity Computing Systems)
is meant to tame the costs of HPC
(High Performance Computing)
-- HPC is the use of supercomputers for
simulations to solve problems in
chemistry, physics, ecology, sociology,
and esp'ly warfare .
. programmer productivity
means making it easier to develop
code that can make full use of
a supercomputer's concurrency .
. the main source of the cost
is a lack of smart software tools
that can turn science experts
into computer coders .
. toward this end, they are funding
the design of new concurrency lang's .
7.28:
. the DoD and DARPA already made
quite an investment in the
Ada concurrency lang',
a language designed for expressing
the sort of concurrency needed by
embedded system engineers; 7.31:
but, the software developer community
spurned Ada as soon as there was
something better ...
. pascal was popular before '85,
and when Ada arrived '83,
it was popular too;
then came the Visual version
of Basic (familiar and slick).
. the top demanded langs by year were:
'85: C, lisp, Ada, basic;
'95: v.basic, c++, C, lisp, Ada;
'05: Java, C, c++, perl, php .

. the only currently popular lang's
meant for exploiting multi-core cpu's
if not other forms of concurrency, are:
(3.0% rating) obj'c 2.0 (with blocks)
(0.6% rating) Go
(0.4% rating) Ada
7.29:
. whether or not Ada's concurrency model
is well-suited for supercomputers
as well as embedded systems,
it is not increasing coder productivity .
. while Ada boosted productivity beyond
that offered by C,
it was nevertheless found to do
less for productivity than Haskell .

HPC Productivity 2004/Kepner{ 2003(pdf), 2004(pdf) }:
. lang's were compared for
expressiveness vs performance:
. the goal of a high-performance lang'
is to have the expressiveness
of Matlab and Python,
with the performance of VHDL
(VHDL is a hardware description language
with Ada-like syntax, used for ASICs ).
. UPC (Unified Parallel C) and Co-array Fortran
are half way to high-productivity
merely by using PGAS
(Partitioned global address space)
rather than MPI
(Message Passing Interface).
. the older tool set: C, MPI, and openMP
is both slow and difficult to use .

. the 2 lang's that DARPA is banking on now
are Cray`Chapel, and IBM`X10
(a Java-derived lang') .
. they were also funding Sun`Fortress
until 2006,
which features a syntax like advanced math
-- the sort of greek that physics experts
are expected to appreciate .

LLVM concurrency representations

Mac OS X 10.6 (and later):
. The OpenCL GPGPU implementation is built on
Clang and LLVM compiler technology.
This requires parsing an extended dialect of C at runtime
and JIT compiling it to run on the
CPU, GPU, or both at the same time.
OpenCL (Open Computing Language)
is a framework for writing programs
that execute across heterogeneous platforms
consisting of CPUs, GPUs, and other processors.
OpenCL includes a language (based on C99)
for writing kernels (functions that execute on OpenCL devices),
plus APIs that are used to define and then control the platforms.
Open for Improvement:
. With features like OpenCL and Grand Central Dispatch,
Snow Leopard will be better equipped
to manage parallelism across processors
and push optimized code to the GPU's cores,
as described in WWDC 2008: New in Mac OS X Snow Leopard.
However, in order for the OS to
efficiently schedule parallel tasks,
the code needs to be explicitly optimized
for parallelism by the compiler.
. LLVM will be a key tool in prepping code for
high performance scheduling.
LLVM-CHiMPS (pdf)
LLVM for CHiMPS
(Compiling High-level languages
into Massively Pipelined Systems)
National Center for Supercomputing Applications/
Reconfigurable Systems Summer Institute July 8, 2008/
Compilation Environment for FPGAs:
. Using LLVM Compiler Infrastructure and
CHiMPS Computational Model
. A computational model and architecture for
FPGA computing by Xilinx, Inc.
- Standard software development model (ANSI C)
Trade performance for convenience
- Virtualized hardware architecture
CHiMPS Target Language (CTL) instructions
- Cycle accurate simulator
- Runs on BEE2
Implementation of high level representations:

# Limitations in optimization
- CTL code is generated at compile time;
LLVM cannot optimize source code whose
expressions are only resolvable at run time
- LLVM does not get a chance to dynamically optimize
the source code at run time
- LLVM is not almighty:
floating-point math is still difficult for LLVM
Cray Opteron Compiler: Brief History of Time (pdf)
Cray has a long tradition of high performance compilers
Vectorization
Parallelization
Code transformation
...
Began internal investigation leveraging LLVM
Decided to move forward with Cray X86 compiler
First release December 2008

Fully optimized and integrated into the compiler
No preprocessor involved
Target the network appropriately:
.  GASNet with Portals . DMAPP with Gemini & Aries .
Why a Cray X86 Compiler?
Standard conforming languages and programming models
Fortran 2003
UPC & CoArray Fortran
. Ability and motivation to provide
high-quality support for
custom Cray network hardware
. Cray technology focused on scientific applications
Takes advantage of Cray’s extensive knowledge of
automatic vectorization and
automatic shared memory parallelization
Supplements, rather than replaces, the available compiler choices

. cray has added parallelization and fortran support .
. ported to cray x2 .
. generating code for upc and caf (pgas langs) .
. supports openmp 2.0 std and nesting .

. Cray compiler supports a full and growing set of
directives and pragmas:
!dir$ concurrent
!dir$ ivdep
!dir$ interchange
!dir$ unroll
!dir$ loop_info [max_trips] [cache_na] ... Many more
!dir$ blockable
man directives
man loop_info
weaknesses:
Tuned Performance
Vectorization
Non-temporal caching
Blocking
Many end-cases
Scheduling, Spilling
No C++, Very young X86 compiler
future:
optimized PGAS -- requires Gemini network for speed
Improved Vectorization
Automatic Parallelization:
. Modernized version of Cray X1 streaming capability
. Interacts with OMP directives
[OpenMP -- Multi-Processing]

DCT (discrete control theory) for avoiding deadlock

[8.30:
. exciting claims I haven't researched yet ...]
8.5: news.adda/concurrency/dct/Gadara`Discrete Control Theory:

Eliminating Concurrency Bugs with Control Engineering (pdf)
Concurrent programming is notoriously difficult
and is becoming increasingly prevalent
as multicore hardware compels performance-conscious developers
to parallelize software.
If we cannot enable the average programmer
to write correct and efficient parallel software
at reasonable cost,
the computer industry's rate of value creation
may decline substantially.
Our research addresses the
challenges of concurrent programming
by leveraging control engineering,
a body of technique that can
constrain the behavior of complex systems,
prevent runtime failures,
and relieve human designers and operators
of onerous responsibilities.
In past decades,
control theory made industrial processes
-- complex and potentially dangerous --
safe and manageable
and relieved human operators
of tedious and error-prone chores.
Today, Discrete Control Theory promises
similar benefits for concurrent software.
This talk describes an application of the
control engineering paradigm to concurrent software:
Gadara, which uses Discrete Control Theory
to eliminate deadlocks in
shared-memory multithreaded software.
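. the classic deadlock Gadara targets is two threads
taking two locks in opposite orders; a Python sketch
of the simplest (non-Gadara) remedy, a global lock order
(Gadara instead synthesizes its control logic offline
with Discrete Control Theory):

```python
# two threads request the same two locks in opposite orders;
# acquiring them in one fixed global order rules out deadlock .
# (a hand-made remedy for illustration, not Gadara's algorithm)
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def ordered_acquire(*locks):
    """Always take locks in a fixed global order (here: by id)."""
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def release(locks):
    for lk in reversed(locks):
        lk.release()

counter = 0
def worker(first, second):
    global counter
    held = ordered_acquire(first, second)
    counter += 1            # the critical section
    release(held)

t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))  # opposite order
t1.start(); t2.start(); t1.join(); t2.join()
```

. without the ordering, the two workers could each hold
one lock while waiting on the other, forever .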

promise pipelining

8.21: news.adda/co/promises/wiki brings understanding:
. yahoo!, this wiki page finally made sense of promises
as exemplified by e-lang's tutorial
which graphically showed things incorrectly;
so that unless you ignored the diagram
you couldn't possibly make sense of the tutorial .
[8.30: ### the following is just my
version of that page, not a working tutorial ###]

t1 := x`a();
t2 := y`b();
t3 := t1`c(t2);
. "( x`a() )" means to send the message a()
asynchronously to x.
If x, y, t1, and t2
are all located on the same remote machine,
a pipelined implementation can compute t3 with
one round-trip instead of three.
[. the original diagram showed all involved objects
existing on the client's (caller's) node,
not the remote server's;
so, you'd have to be left wondering
how is the claimed pipelining
possible for t1`c(t2)
if the temp's t1, and t2
are back at the caller's?! ]
Because all three messages are destined for
objects which are on the same remote machine,
only one request need be sent
and only one response
need be received containing the result.
. the actual message looks like:
do (remote`x`a) and save as t1;
do (remote`y`b) and save as t2;
do (t1`c(t2)) using previous saves;
and send it back .
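. a sketch of that batching in Python
(a toy model of the wiki example,
not E's actual protocol; all names are mine):

```python
# promise pipelining sketch: the client records calls against
# unresolved promises, then the "server" resolves the whole
# batch in creation order -- one round trip for all three calls .

class Promise:
    def __init__(self, batch, target, method, args):
        self.batch, self.target = batch, target
        self.method, self.args = method, args
        batch.append(self)

    def send(self, method, *args):
        # pipeline a call on a not-yet-resolved result
        return Promise(self.batch, self, method, args)

def resolve(batch, objects):
    """The one round trip: walk the batch, substituting resolved values."""
    values = {}
    for p in batch:
        target = values[id(p.target)] if isinstance(p.target, Promise) \
                 else objects[p.target]
        args = [values[id(a)] if isinstance(a, Promise) else a
                for a in p.args]
        values[id(p)] = getattr(target, p.method)(*args)
    return values[id(batch[-1])]

# usage mirrors t1 := x`a(); t2 := y`b(); t3 := t1`c(t2)
class Num:
    def __init__(self, v): self.v = v
    def c(self, other): return self.v + other.v
class X:
    def a(self): return Num(10)
class Y:
    def b(self): return Num(5)

batch = []
t1 = Promise(batch, "x", "a", ())
t2 = Promise(batch, "y", "b", ())
t3 = t1.send("c", t2)
result = resolve(batch, {"x": X(), "y": Y()})
```

. note that t3 is pipelined on t1 before t1 resolves;
only resolve() crosses the wire, once .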
Promise pipelining should be distinguished from
parallel asynchronous message passing.
In a system supporting parallel message passing
but not pipelining,
the messages x`a() and y`b()
in the above example could proceed in parallel,
but the send of t1`c(t2) would have to wait until
both t1 and t2 had been received,
even when x, y, t1, and t2 are on the same remote machine.
. Promise pipelining vs
pipelined message processing:
. in Actor systems,
it is possible for an actor to begin
processing a message before having completed
processing of the previous one.
[. this is the usual behavior for Ada tasks;
tasks are very general, and the designer of one
can make a task that does nothing more than
collect and sort all the messages that get queued;
and then even when it accepts a job,
it can subsequently requeue it .]
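. that collect-sort-requeue behavior can be sketched
in Python with a priority queue (illustrative only,
not Ada's actual requeue semantics):

```python
# a "task" that collects queued messages, hands out the
# highest-priority one first, and can requeue an accepted job .
import heapq

class SortingTask:
    def __init__(self):
        self.queue = []          # heap of (priority, msg); low = urgent

    def send(self, priority, msg):
        heapq.heappush(self.queue, (priority, msg))

    def accept(self):
        """Take the most urgent pending message."""
        return heapq.heappop(self.queue)[1]

    def requeue(self, priority, msg):
        # even an accepted job can be put back for later
        heapq.heappush(self.queue, (priority, msg))
```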