2014-12-20
addm/sec/securing the virtual machine:
4.20: 12.20: summary:
. a virtual machine can provide security
along with some user freedom to develop code
by using a microkernel architecture
and learning from the Chrome browser team:
you can't build a secure app unless you
secure the underlying OS and firmware too .
2013-12-11
multi-language to universal data format (UDF)
addx/multi-lang like .net:
10.2: 12.11:
. the addx system needs to be like
Microsoft's ".NET" such that it lets us
command addx with a variety of languages
-- adda would just be the default language .
. a new language being ported to addx
would provide a compiler that emits
addm's high-level code (HLC)
that is similar to .NET's assemblies .
2013-07-31
reviewing the costs of SOA
31: addm/reviewing the costs of soa:
. when looking at SOA architecture,
how does that affect the
cross-module communications costs ?
there could be a devil in the details;
but, the bird's eye view is that
it could actually be quite minimal .
2013-04-29
extending the const-var architecture
3.31: addm/extending the const-var architecture:
. what I'm calling the constant-var architecture
makes use of hardware isolation mechanisms
by safely dividing the system into
constant code, and variable data .
. there is another segment to consider:
# write-once-read-many:
. what's variable across process instances
can be constant during execution .
# special permissions:
. the implied permission is that the
data in the process belongs to that process .
. the activation record's resource display
is an example of special permissions:
the process has permission only to
read the resource display, not write to it;
but the supervisor can modify it .
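. a minimal sketch of the write-once-read-many idea,
using plain posix calls rather than anything addm-specific:
a table is filled in once at startup,
then its pages are switched to read-only,
so the hardware traps any later write
instead of letting it silently corrupt the data .

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{   /* map a page we may write during the "write once" phase */
    int *table = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (table == MAP_FAILED) { perror("mmap"); return 1; }

    for (int i = 0; i < 16; i++)
        table[i] = i * i;                  /* initialize once */

    /* now "read-many": the hardware enforces const from here on */
    if (mprotect(table, 4096, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("table[5] = %d\n", table[5]);   /* reads still work */
    /* table[5] = 0;  -- would now fault instead of corrupting the table */
    return 0;
}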
Labels:
addm,
architecture,
capabilities,
modules
2013-03-31
combined hardware-virtual isolation
addm/security/combined hardware-virtual isolation:
2.9: 3.31: intro:
. certain attributes of data are essential to security;
eg, if arbitrary data can be retagged so that it is
usable as a pointer to code,
then malware data can end up being treated as instructions to follow .
. generally all data can be tagged
just as it is done by xml .
. there are 2 possible ways to enforce
process isolation and ROM attributes:
# HW (hardware) mem'mgt,
# VM (virtual machine) mem'mgt .
. hardware mem'mgt can enforce VM mem'mgt:
the VM's run-time exec never needs to change;
so, HW mem'mgt can see that code as const;
also, any file that the VM is trying to interpret
can be treated by the HW mem'mgt as
something that only the VM process can modify .
. finally, the VM has its own process space
and this should keep other processes
from corrupting its work space .
Labels:
addm,
attributes,
cap'based,
capabilities,
isolation,
mem'mgt,
safe pointers,
sandbox,
security,
type.tag,
vm,
xml
2013-03-09
virtual machine for obj'c services
1.23: addm/
simulates obj'c when obj'c is not available:
Labels:
adda,
addm,
architecture,
dev.obj'c
2012-08-29
explorations of virtual memory
7.1: adda/vmem'mgt/intro:
. in our virtual memory stack system
we are replacing each of the stack's
subprogram activation records (act'rec's)
with a pointer to a resizable object
(ie, it points to an expandable array in the heap );
thus, the stack [8.29:
-- if we didn't have a stackless architecture -- ]
becomes an array of pairs:
( return address
, pointer to act'rec
) . if our allotted ram is getting full,
we can file the obj's attached to earlier parts of the stack
or even file earlier segments of a very long stack .
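. a rough C sketch of that stack shape
(the names and layout here are invented, not addm's):
each stack slot is just a (return address, pointer to act'rec) pair,
the act'rec's live in the heap,
and a spill routine files out the oldest ones when ram gets tight .

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    size_t size;
    char  *data;                  /* expandable array in the heap */
} ActRec;

typedef struct {
    void   *return_address;
    ActRec *rec;                  /* NULL after the record is filed out */
} Frame;

/* file out the act'rec's of the oldest frames when ram is getting full */
static void spill_oldest(Frame *frames, int top, int how_many, FILE *backing)
{   for (int i = 0; i < how_many && i < top; i++) {
        if (!frames[i].rec) continue;
        fwrite(frames[i].rec->data, 1, frames[i].rec->size, backing);
        free(frames[i].rec->data);
        free(frames[i].rec);
        frames[i].rec = NULL;     /* a real system would record a file offset here */
    }
}

int main(void)
{   Frame stack[8] = {0};
    for (int i = 0; i < 3; i++) {             /* fake three calls */
        stack[i].rec = malloc(sizeof(ActRec));
        stack[i].rec->size = 32;
        stack[i].rec->data = calloc(1, 32);
    }
    FILE *backing = tmpfile();
    spill_oldest(stack, 3, 2, backing);       /* ram getting full: file the 2 oldest */
    printf("frame 0 spilled: %s\n", stack[0].rec ? "no" : "yes");
    fclose(backing);
    return 0;
}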
Labels:
1st-class functions,
adda,
addm,
architecture,
mem'mgt,
pointers,
vmem
2012-08-18
one call for trees of operations
7.22: addm/one call for trees of operations:
. the key to efficiency will be
providing support for integrated use of
both oop and concrete types;
ie, if the tree needs to be done fast
or in a tight space,
then the compiler uses native types;
but if it needs to handle polymorphism,
then it uses the type-tags,
and sends it to an oop type .
biop* oop:
*: (binary operations)
. the problem with the popular oop model
is that it is not helping much for biop's;
so, we should consider the binary parameter
to be one obj;
and, we again have oop working .
[8.13:
. this has always been implicit in my design;
biop oop works by featuring
2 type tags per arg: (supertype, subtype);
eg, (number, integer);
and then we don't have to worry about
where to send a (float, integer)
because they both have
the same supertype: number .
. this note was just pointing out
that I was realizing the syntax x`f -- vs f(x) --
was ok; whereas, previously
I had howled that oop was absurd because
it turned (x * y) into x`*(y)
as shorthand for asking x's type mgt
to apply the *-operation to (x,y);
what oop needs to be doing is (x,y)`*
as a shorthand for calling the type mgt that is
the nearest supertype shared by both x and y,
and asking it to apply the *-operation to (x,y).
8.15: and of course,
oop langs like c++ need to get their own lang
and stop trying to fit within C,
so then we can go back to (x*y)
as a way to write (x,y)`* .]
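. a toy C sketch of that dispatch rule
(the type names are invented for illustration):
each arg carries a (supertype, subtype) pair of tags,
and (x,y)`* is routed, as a pair,
to the type mgt of the supertype both args share .

#include <stdio.h>

typedef enum { NUMBER, TEXT } Super;          /* invented supertypes */
typedef enum { INTEGER, FLOAT, STRING } Sub;  /* invented subtypes   */

typedef struct {
    Super  super;
    Sub    sub;
    double val;          /* toy payload */
} Obj;

/* number's type mgt applies "*" to the pair (x,y) */
static Obj number_mul(Obj x, Obj y)
{   Obj r = { NUMBER, FLOAT, x.val * y.val };
    if (x.sub == INTEGER && y.sub == INTEGER) r.sub = INTEGER;
    return r;
}

/* (x,y)`* : route the whole pair by the supertype both args share */
static Obj biop_mul(Obj x, Obj y)
{   if (x.super == NUMBER && y.super == NUMBER)
        return number_mul(x, y);
    /* a fuller system would climb the type lattice for the nearest shared supertype */
    fprintf(stderr, "no shared supertype for *\n");
    return x;
}

int main(void)
{   Obj a = { NUMBER, FLOAT, 2.5 }, b = { NUMBER, INTEGER, 4 };
    Obj c = biop_mul(a, b);
    printf("2.5 * 4 = %g (subtype %d)\n", c.val, c.sub);
    return 0;
}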
Labels:
addm,
asynchronous,
concurrency,
ipc,
Promise
dynamic linking to video driver
7.11: bk.addm/Gordon Letwin`Inside OS#2:
[8.13: intro:
os/2 was microsoft's next big thing for 1988,
but it never happened despite being
a leap forward in security .
. in this reading session,
I was wondering how it dealt with the issue of
unstable drivers . intro's dynamic linking .]
p89: the familiar static linking:
. the linker handles static links by
noting which symbols are marked external
and hooking those up with similar symbols
to be found in accompanying .obj files .
p29: the new dynamic linking:
. dynamic linking is how operating systems can be
extended or patched by the user or the apps;
just like hardware can accept new circuit boards .
p13:
. because drivers needed protected mode,
there would need to be a mode transition
with every write to the display;
but we avoid this by having apps
not access device drivers directly;
rather, they do so through dynamic linking .
[. how does that explain it?
is dyna'linking facilitating our ability to
run device drivers in user mode?]
p109:
. dyna'linking to the video display driver
is possible because it doesn't require
hardware interrupts;
drivers are generally located in the kernel
only because some are needing access to
hardware interrupts .
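. for comparison with a present-day system,
a small posix sketch of an app reaching a display "driver"
through dynamic linking instead of a static link or a kernel call
(the library name and the symbol are hypothetical stand-ins):

/* sketch: reach a "driver" through dynamic linking, in user mode;
   the library name and entry point are hypothetical . link with -ldl . */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{   void *drv = dlopen("libdisplay_driver.so", RTLD_NOW);   /* hypothetical lib */
    if (!drv) { fprintf(stderr, "no driver: %s\n", dlerror()); return 1; }

    /* look up the entry point by name instead of linking it statically */
    void (*write_pixel)(int, int, unsigned) =
        (void (*)(int, int, unsigned)) dlsym(drv, "write_pixel");
    if (!write_pixel) { fprintf(stderr, "no write_pixel: %s\n", dlerror()); return 1; }

    write_pixel(10, 20, 0xffffff);   /* call into the driver without a mode switch */
    dlclose(drv);
    return 0;
}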
Labels:
addm,
architecture,
drivers,
dyna'linking,
protected mode,
security
2012-01-31
addm's case instruction
addm/cstr/case/large domain space issues:
[1.8: intro:
. for a classic case stmt, one ensuring that
no more than one exec'path was activated,
the primary purpose of the variables (and arithmetic)
is to dynamically set the guard ranges .]
if case ranges can be variable expressions,
how does that work at the machine level?
. the alternative execution paths are numbered: 0...n,
so the case stmt is an array 0..n of
pointer to subroutine .
. the pointer could be an offset of the program counter,
so they can be word sized or even byte sized .
[1.8: case method depends on domain size:
. typically for each exec'path,
there is either a list of literals,
or else a pair of possibly variable expressions
delimiting a range .
. next there are 2 cases of domain space:
# small:
. the expression being cased is less than a byte,
or certainly not more than a 16bit word .
. in this case there is a direct mapping
from enum to exec'path number .
. we sort the literal cases,
thus allowing for a binary search during run-time .
. for the case ranges, a..b, c..d, ...,
we need the if/else if ... list:
case >= a and case <= b ?
goto ...
else case >= c and case <= d ?
goto ...
else ... .
(a C sketch of these small-domain methods follows this bracketed note .)
# huge: ... ]
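. a small C sketch of the small-domain methods above:
a dense selector indexes a table of code pointers directly,
and (possibly variable) guard ranges fall back to the if/else chain .

/* small-domain case sketch: a dense enum indexes a table of code pointers,
   and variable ranges fall back to an if/else chain . */
#include <stdio.h>

static void path0(void) { puts("path 0"); }
static void path1(void) { puts("path 1"); }
static void path2(void) { puts("path 2"); }

static void (*const exec_path[])(void) = { path0, path1, path2 };

int main(void)
{   int selector = 1;                       /* the expression being cased */
    exec_path[selector]();                  /* dense enum: direct mapping */

    int a = 10, b = 19, c = 20, d = 29;     /* guard ranges may be variable */
    int x = 25;
    if (x >= a && x <= b)      exec_path[0]();
    else if (x >= c && x <= d) exec_path[1]();
    else                       exec_path[2]();
    return 0;
}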
large domain space issues:
. how do the cases map to 0..n
when not enum literals?
that's when it loses the efficiency
that case stmt's are known for:
. it's eval'ing every exec'path guard expression
as well as the expression being cased,
then it might be reduced to crawling through
a huge {if/else if ...} list;
what it checks first might depend on profiling:
in the past eval's of this case stmt,
what percentage of time was each case chosen?
1.8: hashing intro:
. the expression is larger than small;
in this case, hashing might work?
in a hash the typical problem is
associating a name with an id# .
. when you find the name,
then you need to replace it with the id#;
you could place the string in a sorted container
in which the insertion operation was cheap;
and then do a binary search to find it,
or you could use the hash method:
. if you expect n strings,
then use an algorithm to turn the string into
a number that can range over n values;
eg, if there are fewer than 2^16 values,
then your hash function should return 16bit ints .
. if 2 strings result in the same number
then they both go in the same bucket,
so when your search finds a bucket,
it has to crawl as with the binary search .
1.8: back to the casing problem:
. if the expression being cased is a huge number or string,
then with the literals used in the guard
we build a hash table,
and then hash the value being cased .
. when we find the match,
it's associated with the exec'path number we need .
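. a minimal C sketch of that:
the guard literals are hashed into a table
that stores each one's exec'path number;
the value being cased is hashed the same way,
and the bucket is crawled on a collision .

/* sketch: map big case labels (strings) to exec'path numbers with a hash table . */
#include <stdio.h>
#include <string.h>

#define BUCKETS 8

typedef struct { const char *label; int path; } Entry;

static Entry table[BUCKETS][4];   /* tiny fixed buckets, just for illustration */
static int   fill[BUCKETS];

static unsigned hash(const char *s)            /* any cheap string hash */
{   unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

static void add_case(const char *label, int path)
{   unsigned b = hash(label);
    table[b][fill[b]++] = (Entry){ label, path };
}

static int lookup(const char *value)           /* crawl the bucket on collision */
{   unsigned b = hash(value);
    for (int i = 0; i < fill[b]; i++)
        if (strcmp(table[b][i].label, value) == 0) return table[b][i].path;
    return -1;                                 /* the "others" path */
}

int main(void)
{   add_case("open", 0); add_case("close", 1); add_case("reset", 2);
    printf("case \"close\" -> exec'path %d\n", lookup("close"));
    printf("case \"quit\"  -> exec'path %d\n", lookup("quit"));
    return 0;
}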
2011-11-30
evlan.org describes many #addx features
10.31: news.adda/lang"evlan/
evlan.org describes many addx features:
. Kenton Varda's Evlan has many addx features;
but, it "(currently has no type system);
and, it died in 2007.10.1, when the author "(got a real job)!
. what first caught my eye was cap-based security:
Most of today's computer platforms operate under
a fundamental assumption that has proven utterly false:
that if a user executes a program,
the user completely trusts the program.
This assumption has been made by just about
every operating system since Unix
and is made by all popular operating systems used today.
This one assumption is arguably responsible for
the majority of end-user security problems.
It is the reason malware -- adware, spyware, and viruses --
are even possible to write,
and it is the reason even big-name software
like certain web browsers
are so hard to keep secure.
Under the classical security model,
when a user runs a piece of software,
the user is granting that software the capability
to do anything that the user can do.
Under capability-based security,
when a user runs a piece of software,
the software starts out with no capabilities whatsoever.
The user may then grant specific capabilities to the program.
For example, the user might grant the program
permission to read its data files.
The user could also control whether or not
the program may access the network, play sounds, etc.
Most importantly,
all of these abilities can be controlled
independently on a per-program basis.
. the other main idea like addx
is its being a virtual operating system
which runs on top of any other operating system;
via a virtual machine that is the sole access
to the underlying platform's OS .
. by a complete platform,
he means it is not specialized to be
efficiently accessing all of bloated unix,
but to define what it needs,
and then have the vm access what it needs
for impl'ing the Evlan virtual platform .
adda's secure by language idea:
Even better than implementing security in an operating system,
is implementing it in a programming language:
developers are able to control the capabilities available
to each piece of code independently.
Good practice, then, would be to only give each component
the bare minimum capabilities that it needs to
perform its desired operation.
And, in fact, it is easier to give fewer capabilities,
so this is what programmers will usually do.
The result is that if a security hole exists in some part of your program,
the worst an attacker can do is
gain access to the capabilities that were given to that component.
--[. keep in mind that this won't work without
also implementing security at the hardware level;
this is the level where modularity can be enforced;
otherwise, malware could just jump to
what ever code there is with the permissions it needs .
(see qubes doc's) .]
It is often possible to restrict all "dangerous" capabilities
to one very small piece of code within your program.
Then, all you have to do is make sure that
that one piece of code is secure.
When you are only dealing with a small amount of code,
it is often possible to prove that this is the case.
It is, in fact, quite possible to prove
that a program written in a capability-based language
is secure.
[. great to know if your os and every other app
-- that means esp'ly your browser --
are all written in this provable language ? ]
see: Capability-based Security is used in the
EROS operating system and the E programming language.
Also, Marc Stiegler et al.
addx's idea that concurrency is the key
to secure speed:
Evlan has strong support for highly concurrent programming
being a purely functional language,
Where automatic parallelization is not possible,
there are very lightweight threads (without stacks).
Most of today's systems use imperative languages
meant for a single CPU; eg, [written in adda:]
processAll(results#t.type).proc:
for(i.int = results`domain):
results#i`= processItem results#i
).
. in c, it would process each item in order,
from first to last.
It does not give the system any room to
process two items in parallel
or in some arbitrary (most-efficient) order .
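. to make the contrast concrete, the same loop in c:
the plain form fixes the order,
and the system may only reorder or parallelize it
if the programmer separately asserts that the iterations are independent
(the openmp pragma below),
which is exactly what a purely functional form states for free .

#include <stdio.h>

static int process_item(int x) { return x * 2; }

void process_all(int *results, int n)
{   #pragma omp parallel for        /* without this, strictly first to last */
    for (int i = 0; i < n; i++)
        results[i] = process_item(results[i]);
}

int main(void)
{   int r[4] = { 1, 2, 3, 4 };
    process_all(r, 4);
    printf("%d %d %d %d\n", r[0], r[1], r[2], r[3]);
    return 0;
}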
adda aspires to be simple:
It is far easier to remember
multiple uses for a single construct
than to remember multiple constructs.
--[. however he has ada-itis and brags about
having only 19 keywords;
one of which is "(where)
which could have been said with a reused "(:) .]
interoperability done via compatibility layers:
If and when the legacy software is no longer in use,
the compatibility layers can be removed,
leaving only the ultra-efficient Evlan core behind.
functional:
Because Evlan is purely functional,
no function call is allowed to have "side effects".
. see www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=225
(dead link, is this right: queue.acm.org/detail.cfm?id=864034)?
-- ask google where else that link can be found:
Natural programming languages and environments
Communications of the ACM September 2004/Vol. 47, No. 9 by Brad A. Myers, John F. Pane, Andy Ko
natural programming:
This article presents findings on natural programming languages.
(i.e. natural as in intuitive in the UserCentric way)
It advocates features for programming languages,
that are justified by these findings.
Areas include:
SyntacticSugar, that causes fewer errors
(it is more intuitive/natural).
Better integrated debugging-support,
esp. for hypotheses about bug-causes.
which suggests that humans like to write rules of the form
"When condition X is true, do Y".
I am not yet sure to what extent this will
change the current system, however.
By "natural programming" we are aiming forFunctional Bytecode
the language and environment to work the way that
nonprogrammers expect.
We argue that if the computer language were to
enable people to express algorithms and data
more like their natural expressions,
the transformation effort would be reduced.
Thus, the Natural Programming approach
is an application of the standard user-centered design process
to the specific domain of programming languages and environments.
[at that link is a] good short summary of the
Natural Programming project here at CMU.
We're affiliated with the EUSES consortium,
which is a group of universities working on
end-user software engineering.
google's 2007 Update on the Natural Programming Project
Functional Bytecode:
Bytecode for the Evlan virtual machine
uses a functional representation of the code.
In contrast, the VMs of Java and .NET
use assembly-like imperative instructions.
. the GNU C Compiler's "Static Single Assignment" phase
is actually very close to being equivalent to
a purely functional language. [no overwrites]
If GCC does this internally just to make optimization easier,
then it doesn't take a huge stretch to imagine a system which
compiles imperative code to functional bytecode .
--[11.30:
. another contrast to assembly-like imperative instructions
is addm's idea for a high-level interpreter:
instead of translating from an abstract syntax tree (AST)
into a sequence of assembly instructions;
addm, the vm, would try to evaluate an AST . ]
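. a toy C sketch of that last idea,
an interpreter that evaluates an expression tree directly
instead of first flattening it into an instruction sequence
(this node layout is only an illustration, not addm's):

#include <stdio.h>

typedef enum { LIT, ADD, MUL } Kind;

typedef struct Node {
    Kind kind;
    double value;                   /* used when kind == LIT */
    struct Node *left, *right;      /* used for ADD / MUL */
} Node;

/* evaluate the tree by walking it, no assembly-like instruction stream */
static double eval(const Node *n)
{   switch (n->kind) {
    case LIT: return n->value;
    case ADD: return eval(n->left) + eval(n->right);
    case MUL: return eval(n->left) * eval(n->right);
    }
    return 0;
}

int main(void)
{   /* the tree for: (2 + 3) * 4 */
    Node two = { LIT, 2 }, three = { LIT, 3 }, four = { LIT, 4 };
    Node sum  = { ADD, 0, &two, &three };
    Node prod = { MUL, 0, &sum, &four };
    printf("(2 + 3) * 4 = %g\n", eval(&prod));
    return 0;
}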
The purpose of constraining variables to types
is to get the compiler to check your work for bugs
and optimize code as well .
. if you could specify that an index to an array
must be less than the size of the array
-- and if the compiler could then check for that --
then buffer overruns (one of the most common bugs
leading to security holes in modern software)
would be completely eliminated.
(run-time bounds checking can prevent buffer overruns,
but then users see your app shutting down unexpectedly).
Labels:
adda,
addm,
addx,
cap'based,
capabilities,
concurrency,
e-lang,
functional,
microkernel,
security
2011-05-02
llvm's role in addx
4.22: addm/llvm's role in addx:
. why can't addm just run llvm code?
isn't elegance a dumb place to put an asm lang?
# c-friendly:
the whole point of addm is to
minimize dependencies;
many systems have no llvm system
but do have a c compiler .
# simplicity = involvement:
. while llvm is all about efficiency,
addx is about making computers accessible .
. just as there can be
cpu's designed for a language,
addm's purpose as a virtual machine
is to match the architecture of adda .
. llvm does fit in as a module:
it would be a tremendous achievement
-- on par with Apple's llvm for obj'c --
to have a way for directly translating
from addm to llvm
rather than the c link used now:
adda -> c -> llvm .
2010-04-01
battery-saving architecture
3.27: addx/batt'saving architecture:
notifications:
. without notification systems or hardware interrupts
ipc (interprocess communication) requires
a polling loop which uses power all the time
just checking whether anything needs power!
. notifications work by having service agents
tell the os what they are checking for,
and then the os will call them back when it happens .
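. the shape of that, sketched with plain pthreads (the names are invented):
the agent sleeps on a condition variable, using no cpu,
until the os-side code signals the event and the agent's handler runs;
contrast with a loop that keeps waking up just to poll a flag .

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  event = PTHREAD_COND_INITIALIZER;
static int event_happened = 0;

static void *agent(void *arg)
{   (void)arg;
    pthread_mutex_lock(&lock);
    while (!event_happened)               /* sleeps here: no polling, no power drain */
        pthread_cond_wait(&event, &lock);
    pthread_mutex_unlock(&lock);
    printf("agent: called back, handling the event\n");
    return NULL;
}

int main(void)                            /* compile with -pthread */
{   pthread_t t;
    pthread_create(&t, NULL, agent, NULL);
    sleep(1);                             /* ... later the os notices the condition */
    pthread_mutex_lock(&lock);
    event_happened = 1;
    pthread_cond_signal(&event);          /* and notifies the waiting agent */
    pthread_mutex_unlock(&lock);
    pthread_join(t, NULL);
    return 0;
}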
power-saving mode:
. another needed control system is batt'savings mode;
because, agents can provide
higher levels of service if they know
whether the user has nearby access to more batt's .
. services can vary their levels of service by
putting optional services in a separate thread,
and then marking that thread as optional
which then affects its scheduling priority .
. the user can indicate how much to conserve,
and then the system sets a power.conserve.flag .
. the scheduler can use that flag
to adjust the way it schedules the optionals .
. put all optional work on a todo list;
or, if it's now-or-never,
tell the user what's not being done
while in min-drain mode .
. another option while employing a user agent
is letting the agent handle the annoying,
repetitive questions from a service .
. the os uses the average recent rates of cpu usage
to estimate relative batt'life or work levels .
. the os also considers batt's charge age:
if the batt' is near the end of life,
then optional services should be cut anyway
since being in that state worsens the amount of
batt'drain per work done .
. if the system is working more than minimally,
it can then ask the user
whether they'd rather have more batt'life .
. it could do this with an alert
or an on-going notification (animated icons)
for both work intensity and remaining energy;
but, on some systems,
the remaining energy is not easy to measure .
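. one way the conserve flag could gate optional work,
as a small C sketch (all names invented):
when the flag is set, deferrable work goes on the todo list,
and now-or-never work is skipped with a note to the user .

#include <stdio.h>

typedef struct { const char *name; int deferrable; } Task;

static int power_conserve_flag = 1;     /* set when the user asks to save battery */

static void queue_for_later(Task t)   { printf("todo: %s\n", t.name); }
static void tell_user_skipped(Task t) { printf("skipped (min drain): %s\n", t.name); }
static void run_now(Task t)           { printf("running: %s\n", t.name); }

static void schedule_optional(Task t)
{   if (!power_conserve_flag)      run_now(t);
    else if (t.deferrable)         queue_for_later(t);
    else                           tell_user_skipped(t);   /* now-or-never */
}

int main(void)
{   schedule_optional((Task){ "index new mail", 1 });
    schedule_optional((Task){ "animate battery icon", 0 });
    return 0;
}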
Labels:
adda,
addm,
addx,
ipc,
notifications
2010-03-31
llvm and .NET or lisp
3.20: addx/llvm and .NET or lisp:
. how did llvm fit in with .net?
the .net was an improvement over bytecode;
it had a binary format of high-level code
-- call it wordcode --
and any language could compile to that wordcode .
. there's security in the wordcode:
it's easy to verify it's not malware,
and then a platform-specific runtime can
translate wordcode into an exec.file .
. llvm presents itself as the universal asm.code
(source code of a machine instruction set)
so one quick way to roll a wordcode compiler
is to compile wordcode to asm.code .
. then the llvm for that platform
will do all the heavy lifting of
jit-compiling and dynamic morphing
that keeps the asm.code mirrored in
the most efficient version of native code
for the current circumstances .
. there need be no practical difference between
word.code and the etrees of lisp
and perhaps that's why Stallman would
downplay .net as a Microsoft false freebie .
. what we really need from .net
is something more lisp-like
where the only constant to the std
is that etrees describe the conventions
of the current package .
. what we really need from lisp
is the same binary std for word.code
that .net calls assemblies .
. at the moment lisp's std is text-based
not binary -- xml is the same thing:
people should be communicating with that
only as a last resort,
when no binary convention has been agreed upon .
3.23:
. Mono from SVN is now able to use LLVM as a backend .
3.22: addx/security/review:
. what was the system that offered better security than
soa's token-based (smart-carded) logged messaging ?
. how did it differ from virtualization ?
. the first wall starts with using source code
rather than binaries:
. a program is an adda binary
that is then compiled to addx code
which is native code but can access only addx
(assured from being compiled by adda),
and then only addx can access actual devices .
. with this separation of power,
the program can tell addx what it needs
and then the user can tell addx
whether those needs sound reasonable .
. an adda binary is an etree (a parse tree)
where half of the compilation is already done:
. after adda text is parsed into etrees,
the etrees are safely translated by adda
to a system lang for compilation into a native binary .
. in essence the adda program has written the program;
if the original author had
programmed in the system lang'
then it could unsafely have
direct access to the machine .
. this is not unlike the .net security strategy;
but, addx is not concerned about supporting
multiple source lang's . [3.31: nevertheless,
adda's etree could be the target of
some other lang's compiler, for piping:
[your lang] -> adda`etree -> [some adda`backend] ]
. both { .net, adda } have a binary format
for sharing code, but .net's assemblies
may be more thoroughly processed than parse trees .
[3.31: and conversely, parse trees are in a
more modifiable, machine-comprehensible state
than .net's assemblies .]
. the ideal situation is to translate adda etrees
directly into llvm code,
but the easiest way to build addx in a portable way
is translating adda etree to the source text
of the language that the target platform prefers;
eg, when on the mac, the target language is
c, obj'c, and the cocoa framework;
while on linux and elsewhere,
it would be c, c++, and the qt framework .
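. a rough C sketch of the "etree to target source text" step
(the node shape is invented, just to show the source-to-source idea):
walk the tree and print the target language's expression syntax .

#include <stdio.h>

typedef struct Node {
    const char *op;                 /* "+", "*", or NULL for a leaf */
    const char *leaf;               /* identifier or literal when op == NULL */
    struct Node *left, *right;
} Node;

/* emit the tree as c source text, fully parenthesized */
static void emit_c(const Node *n, FILE *out)
{   if (!n->op) { fputs(n->leaf, out); return; }
    fputc('(', out);
    emit_c(n->left, out);
    fprintf(out, " %s ", n->op);
    emit_c(n->right, out);
    fputc(')', out);
}

int main(void)
{   /* the etree for: a * (b + 2) */
    Node a = { NULL, "a" }, b = { NULL, "b" }, two = { NULL, "2" };
    Node sum  = { "+", NULL, &b, &two };
    Node prod = { "*", NULL, &a, &sum };
    emit_c(&prod, stdout);          /* prints: (a * (b + 2)) */
    putchar('\n');
    return 0;
}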
. on the Android OS, the main source of
security is a slow vm
that compounds the speed hit caused by
addx's security not allowing
direct access to anything;
addx is asking every cross-module communication:
"(who are you,
and do you have permissions for that?)
. then if addx is to run dynamically modified code,
it's running a vm within a vm,
and things get even slower!
. the .net idea of security is much faster
than any vm security (even Dalvik)
but perhaps that speed
would come at the cost of space
-- the Android fits within 64mb .
. or,
perhaps because of patents,
any open system that was like .net
would not be easy to defend .
. the primary reason I'm not bothering to check
is that .net's idea is not nearly as simple as mine!
. you can avoid a lot of low-level programming
if you just learn the target's high-level language,
and do a simple lang-to-lang translation .
[3.31:
. actually what .net is essentially doing is
unpacking assemblies into etrees,
checking them the way adda would,
then translating them to
the etrees of some native compiler,
and compiling them dynamically .
. it's that dynamism that gives it the name .net;
that's the basis for dynamically accepting code
from other websites in a safe way .
. it may not need to get low-level,
but translating between etree formats
is definitely more complex than source-to-source .]
meeting Android OS and Dalvik VM
3.26: summary:
. android as a subtopic of addx
is concerned with how android's ideas
could contribute to or be reused by addx .
. of immediate interest is their vm,
which is significantly more efficient than the java vm .
. some are upset by the changes it made to linux
while others call it a refreshingly clean architecture;
the body language of the google staff who were
defending the changes indicates that a lot of
compromises had to be made in order to appease
the various business-related needs
of the other oha (Open Handset Alliance) members .
. it's still an open platform,
but users can only modify their system as root users
-- a line crossed that can void their contract .
. unlike the Apple app'store model,
there is nothing in the Android model
that would conflict with addx`goals .
3.19: web.addx/android:
. wiki's android .
. some app dev's call android a gutted linux .
thoughts of an android system engineer:
Matt Porter was heavily involved in the MIPS and PPC ports of Android:
"(Android is not Linux in the strict sense of the wordHarald Welte summarizes Matt Porter's critique:
because important userspace components are missing,
thereby making Android comparatively inaccessible and inflexible.
Android uses, for example, its own mount system
that works with MMC subsystems out-of-box
rather than with USB devices.
Support is missing for udev, glibc, and
SysV process communication,
but are replaced by a somewhat hard to change,
hard-coded implementation from the Open Handset Alliance.
. Android makes no use of tslib for touchscreen support
and lacks effective Ethernet support.
[More arguments are included in his set of slides.]
Second, the Android community is
lagging behind other Linux and open source communities,
partly because the platform is commonly developed
outside the Android Open Source Project (AOSP) tree
and given less priority in the open repository.) .
"(. google has simply thrown 5-10 years. Android's design is defended:
of Linux userspace evolution into the trashcan
and re-implemented it partially for no reason.
Things like hard-coded device lists/permissions in object code
rather than config files,
the lack of support for hot-plugging devices (udev),
the lack of kernel headers.
A libc that throws away System V IPC
that every unix/Linux software developer takes for granted.
The lack of complete POSIX threads.
Just one more practical example:
You cannot even plug a USB drive to an android system,
since /dev/sd* is not an expected device name in their
hardcoded hotplug management.)
Android's design is defended:
"(Android's specialized and inflexible character. the android userspace sounds pretty tight;
was a result of performance and resource reasons.)
. the android userspace sounds pretty tight;
what's it like for the dev'ers? [3.20:
. those concerns were by app dev's, not users!
they are crying about not being able to use
the usual linux tool chains to dev' for linux .
. it's likely that dev'ers are not accustomed to
having big brother defending the consumer
by creating so much burden for the sake of efficiency
and security -- the sad fact is
that linux is a malware magnet just like windows
exactly because
those platforms let dev's have their way .]
. Android is now owned by the Open Handset Alliance
where google is just one of many members;
the oha has an interesting page showing
all the roles in the communications industry,
and all the players in controlling Android .
lecture of why Android is based on a new vm:
. libraries written in C and other languages
can be compiled to ARM native code and installed
by using the Android Native Development Kit (NDK) .
The NDK converts c to the cpu's native code, not bytecode
Simply re-coding a method to run in C
usually does not result in a large performance increase
but does always increase application complexity.
The NDK can, however, be an effective way to
reuse a large corpus of existing C/C++ code.
--[. if c is not your favorite lang,
it would apparently be best to adapt your
translator: adda -> android's java
rather than: adda -> c for android's ndk .
3.20:
. but how much "(complexity) are we talking about here?
creating an adda compiler for a completely new language
is itself a significantly complex problem .
]
. as per the usual security measures
android distinguishes the app dev'
from the platform (system) dev' .
. only the system dev's can make good use of
Java Native Interface (JNI),
a programming framework that allows
byte code running in a Java VM
to call and to be called by cpu's native code .
--[. this is the same trade-off
that the iphone os had to make:
you could allow app' dev'ers to write native code
but then require them to
submit their code to human inspection
and risk having it rejected by hooded judges;
or,
you could do it the android way:
let anyone come without inspection
but then keep them tied up in a rather slow vm .]
Native classes can be called from Java code
running under the Dalvik VM
using java's System.loadLibrary call .
Unlike most Java VMs that are
stack-based, running Sun`java bytecode,
and having problems with concurrency,
the Dalvik VM, with a new byte code set,
is register-based, and is designed to
efficiently support concurrency .
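. for reference, the C side of such a native method looks like this
(the class and method names are made up);
the java side declares the method native and calls System.loadLibrary first .

/* sketch of the C side of a JNI native method (class and method names are made up);
   java side:  public static native int addNative(int a, int b);
               static { System.loadLibrary("demo"); }                               */
#include <jni.h>

JNIEXPORT jint JNICALL
Java_com_example_Demo_addNative(JNIEnv *env, jclass cls, jint a, jint b)
{   (void)env; (void)cls;     /* unused in this tiny example */
    return a + b;
}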
. the new vm is optimized for platforms having
small ram, no hard drive, and a slow cpu .
. total ram on many platforms
is projected to be limited to 64mb;
and, app's typically have only 20 mb to play in .
. the system lib is 10mb --[ steep learning curve
but allows reuse of a lot of code .]
. by having a large library of native code
there's reduced need for jit,
because each bytecode instruction
represents many lines of highly-optimized native code .
. java's jar.files arrange all info within
the class it belongs to;
whereas [Dalvik EXecutable] files (dex.files)
contain the same info but arranged for
maximal sharing, with segments for vari'data,
and sharing of constant's in these segments:
string, type, proto[function signature], field, method .
. using that organization,
dex.files, even when uncompressed, take up
less space than jar files, even when compressed .
. cpu speeds in phones are expected to be
250 .. 500 mhz, bus speed is 100 mhz;
with a 16..32k cache .
types of mem:
( clean: unmodified
. dirty: modified -- app's structs and heaps
. private: accessed by only one process
. shared: accessed by multiple processes .
)
. {clean vs dirty} mark bits are kept separate
instead of being embedded with each obj .
advantages: reduces cache misses,
avoids unsharing and padding needs;
. an android app's install-time work
is the same as ada's compile-time work:
time for a lot of verifications and optimizations .
[3.26:
. this is done with the desktop dev'tools,
not on the mobile unit itself .
. the toolchain converts java lang'
to java vm byte code (.jar)
and then converts .jar's to android vm bytecode .]
. being register-based avoids instruction dispatch:
30% fewer instructions and code units,
35% more bytes per instruction
but it can fetch 2 bytes at a time;
eg, compare jar's vs dex's for this code:
sumarray(arr#.int).long:
( sum.long=0
; for (i.int in arr) sum`+ i
; return sum
)
. java bytecode does this in
(25 bytes 14 dispatches 45 reads, 16 writes)
. android bytecode does it in
(18 bytes 6 dispatches 19 reads, 6 writes) .
. humans can interact at 10..30 reactions/s
and expect video at 25..30 frames/s,
audio sync'd at 10 times/s .
. calling those times the human scale,
contrast that with computer scale:
just handling human interaction takes
far less than 1% of cpu's capacity .
. mobile devices run on tiny batteries;
so, your app should be going back to sleep
rather than staying in a loop,
and when it does loop,
some loop`heads are more efficient than others;
they show you a list of loop head coding styles
and their impact on cpu time (battery life).
. why base app dev'ing on the java lang?
much of openware work is done in that;
and, there are compilers for many other lang's
that will translate to java classes (jar.files);
android has a tool that can convert these jar's
to android's bytecode (dex's)
-- but they don't do that dynamically on the handheld,
so the jython idea is out .
[3.26:
. a twist of the language question would be
why use the java lang but then toss the java vm idea ?
. the really progressive idea is llvm:
a tool that converts llvm code to your vm's bytecode .
--. Apple's clang project translates obj'c to llvm;
the llvm community is then translating llvm to
a variety of cpu's, including mac's x86 .]
[3.26: llvm`backend for android`Dalvik
Patrick Walton
Date: Fri, 09 Jan 2009 22:04:16 -0800
Subject: Type safety of the Dalvik VM?
. I'm interested in writing an LLVM backend for Dalvik
(allowing C code to be compiled to Dalvik),
Certainly, the apparent lack of type safety
makes it easy to write a C-to-Dalvik compiler,
but I want to make sure that my methods won't depend on
exploiting bugs that are going to be fixed.
"Andrew Stadler (Google)"
Date: Fri, 9 Jan 2009 22:11:09 -0800
. make yourself very familiar with the bytecode verifier,
which runs over all java bytecode
before it gets to the actual Dalvik VM,
and provides the expected safety checks .
fadden
Date: Sun, 11 Jan 2009 10:34:41 -0800 (PST)
. type safety is a goal, and in some cases
Dalvik is safe to a greater extent than other VMs
(e.g. it's more picky about the width of sub-32-bit integers).
Some details are available in dalvik/docs/verifier.html.
Identifying type-safety issues during verification
allows us to avoid making those checks at runtime,
improving performance. Because DEX files are read-only
to everything but a couple of system processes,
we can (usually) do the verification
at app`install-time, and not have to repeat it
at app`startup-time.
--. it sounds like the best bet is to find llvm ->java vm .
David Roberts 2009, JVM Backend:
I've written a backend for LLVM
that allows LLVM IR to be transformed to a Java/JVM class file .
2008: the only “foreign” language to run on Android is Scala .
Invoke JNI based methods (Bridging C/C++ -> Java)
Developing using C++
StephC
Date: Tue, 13 Nov 2007 03:53:17 -0000
Local: Mon, Nov 12 2007 8:53 pm
J2ME is a real nightmare for mobile games developers;
it's even considered by many as the reason why
mobile games aren't really succeeding so far.
John Carmack, a game developer: "(It turns out that I'm a lot less fond of Java
for resource-constrained work.
I remember all the little gripes with Java's language,
like no unsigned bytes,
and the consequences of strong typing, like no memset,
and the inability to read resources into
anything but a char array,
...
Even when compiled to completely native code,
performance is hobbled by Java semantic requirements
like range checking on every array access .)"
2010-03-30
notifications support
3.16: adda/notifications:
. after reading about dev.mac's notification system
I'm wondering how adda, too,
can provide this service .
. existing parts of adda should be reused
when possible;
eg, all coprograms having mailboxes .
[3.29: coprograms include all concurrent processes:
coroutines, tasks, type-mgt's,
and other background services .]
multi-level interfaces:
. you can see notifications in action
during a debug watch-point command:
that makes execution stop at
any code that modifies a given obj .
. this is economically possible because of
the multiple levels of interface
where all the high-level entries
use just one low level write function;
so an obj's write function
can be replaced with a
(call me instead of writing) hook
-- see the sketch after this paragraph;
likewise, in the same way that
classes can be subclassed,
objects within the same class can be
individually customized
so as to facilitate notifications .
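. a minimal sketch of that per-object write hook,
assuming writes can be redirected per object
(Cell, watch(), etc are hypothetical names, not adda's):
import java.util.function.LongConsumer;
// one low-level write entry serves all high-level operations;
// a watched object gets a replacement that notifies before storing .
final class Cell {
    private long value;
    private LongConsumer write = v -> value = v;    // the one low-level write
    void set(long v)  { write.accept(v); }          // high-level entry
    void add(long v)  { write.accept(value + v); }  // another high-level entry
    long get()        { return value; }
    // customize this one object (not its whole class) for notifications:
    void watch(LongConsumer notify) {
        write = v -> { notify.accept(v); value = v; };
    }
}
. a debug watch-point then just installs, on the one object being watched,
a notify callback that suspends the task .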
3.17:
. this can be done while still
reusing class functions
by having every object include
along with a tag
a watched bit for impl'ing notifications .
. the way to identify an object
is that every object
-- even when on the stack rather than heap --
is part of a process with an id#
and then part of a call path:
eg, process#2/sub1/sub2
. or,
every object is part of an act'rec ...
(this needs a proof,
but the future looks bright);
anyway,
the run-time exec' is checking the watched.bit
during each object access,
and if it's set,
then it's got that obj's id,
which is used to look up the object
in a notification database
to see what about the object is customized .
. the db can provide
the conditions under which
notifications need to be sent out,
and can also keep the list of
which mailboxes should be notified,
or what actions need to occur
if a certain trigger is set .
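. a rough sketch of that watched-bit path and the db lookup
(java, with hypothetical names;
the id format and condition handling are simplified guesses):
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
// sketch of a notification db keyed by object id .
final class NotificationDb {
    static final class Watch {
        final Consumer<String> mailbox;  // where to send the notification
        final String condition;          // when to send it (not evaluated in this sketch)
        Watch(Consumer<String> mailbox, String condition) {
            this.mailbox = mailbox;
            this.condition = condition;
        }
    }
    // keyed by object id, eg "process#2/sub1/sub2/x"
    private final Map<String, Watch> byObjectId = new HashMap<>();
    void register(String objectId, Watch w) { byObjectId.put(objectId, w); }
    // the run-time exec calls this only when an object's watched bit is set:
    void onAccess(String objectId, String event) {
        Watch w = byObjectId.get(objectId);
        if (w != null) w.mailbox.accept(event);
    }
}
. unwatched objects never reach this code,
since their watched bit stays clear .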
3.16: addm/notifications:
. assuming every obj has a watched bit
for impl'ing notifications:
. a problem with letting arbitrary obj's be watched
is that it brings down performance across the entire system,
regardless of how little
the watching feature is used .
. for obj watching to be efficient,
have 2 versions of addm:
one will be checking the watch bit,
while the other version will ignore it .
. every process has a pointer indicating
which virtual processor it's being run on,
so only processes containing watched obj's
need to run on the checking version (sketched below) .
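. a tiny sketch of that split
(VirtualProcessor, FastVp, etc are illustrative names, not addm's):
// each process record points at the virtual processor it runs on;
// only processes with watched obj's are bound to the checking variant .
interface VirtualProcessor { void step(AddmProcess p); }
final class FastVp implements VirtualProcessor {
    public void step(AddmProcess p) { /* interpret; never looks at watch bits */ }
}
final class WatchingVp implements VirtualProcessor {
    public void step(AddmProcess p) { /* same, plus a watched-bit check per obj access */ }
}
final class AddmProcess {
    VirtualProcessor vp = new FastVp();             // default: no watching overhead
    void enableWatching() { vp = new WatchingVp(); }
    void runOneStep() { vp.step(this); }            // dispatch through the per-process pointer
}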
Labels:
adda,
addm,
concurrency,
cstr,
notifications
2010-01-30
addm`exceptions
1.20: addm/exceptions:
. I first had the idea of stacking
two return addresses instead of the usual one,
one for each of the 2 cases of whether or not
the system was in an exception-raised mode .
. however, in recursive situations
that would use a lot more memory;
ie, an extra pointer per call,
when the alternative is to have a
conditional at the call's return point
which could be shared by all those recursive calls .
. then I found a way in which
it wouldn't even need the conditional.stmt:
. for a call.stmt that may involve an exception,
place a goto.stmt after it;
then make the call's return address
land beyond the goto.stmt .
. in an exception.mode, the system would
treat the return address differently,
decrementing it before using it,
which would then cause the return to
land on the goto that would jump to the
exception-casing stmt,
or if letting a super.routine handle it,
the goto could just land at the
[cleanup and exit].section .
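. a minimal sketch of that return-address arithmetic
(java, with an invented code layout; addm's actual encoding may differ):
// sketch of the return-address trick; the layout is invented .
final class ReturnTrick {
    // layout around a call site:
    //   [c]   call f
    //   [c+1] goto handler   -- reached only via the decremented return address
    //   [c+2] normal continuation
    static int resumePoint(int storedReturnAddress, boolean exceptionRaised) {
        // normal mode: land beyond the goto; exception mode: land on the goto .
        return exceptionRaised ? storedReturnAddress - 1 : storedReturnAddress;
    }
    public static void main(String[] args) {
        int callSite = 10;
        int stored = callSite + 2;                        // pushed at call time
        System.out.println(resumePoint(stored, false));   // 12: normal continuation
        System.out.println(resumePoint(stored, true));    // 11: the goto to the handler
    }
}
. recursive calls all share the one goto at their call site,
so no extra per-call memory is needed .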
Labels:
addm,
exceptions
2009-12-31
bytecode cache
12.1: addm/bytecode.cache:
. just as there are high-speed registers for an act'rec's hotspots,
there could be registers for the bytecode segment .
. if the code were in registers,
then the addresses could be absolute instead of
offsets from the reg#pc (program counter) .
. code could then be shared between processes by swapping only the data segs .
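. a tiny sketch of the addressing point (illustrative numbers only):
with the code held in a fixed place, a branch target can be stored ready-made
instead of being recomputed from the program counter on every jump:
// sketch only; the numbers are made up to show the arithmetic .
final class BranchTargets {
    public static void main(String[] args) {
        int pc = 40;                 // current program counter
        int relOffset = 12;          // what a pc-relative encoding stores
        int viaPc = pc + relOffset;  // recomputed at run time on every jump
        int absolute = 52;           // what an absolute encoding could store directly
        System.out.println(viaPc == absolute);   // true: same target, less work per jump
    }
}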
Labels:
addm
2009-12-30
addm's save registers
11.29: sci.addm/sharing registers among sub's efficiently:
. when an act'rec saves registers,
couldn't this be done more space-efficiently by
having each subprogram back up its own?
ie,
main has act'rec hold all var's
but while in control,
it copies its hot var's to reg's .
. when it calls a sub,
it first updates the stack with the current reg'val's,
then calls the sub', which can do the same thing .
. in case of concurrency,
the task frame needs a space to hold reg's
in a place where the scheduler can find them .
. this could be an unwise idea
depending on what the efficiencies are
and the ways for registers to be moved to stack .
. as a routine proceeds, its use of reg's varies;
so having a single register_save.rec for each act'rec
keeps the amount of save code down .
. one shared routine can save a vector-given set of regs
to curr's reg'save.rec [as is typical of asm store-multiple instructions,
where each register is represented by a bit
telling the instruction which reg's to save; sketched below .]
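. a sketch of that bitmask-driven save
(illustrative java, not addm's actual mechanism):
// one shared routine saves exactly the registers the mask names .
final class RegSave {
    // bit i of mask means "save register i" .
    static int saveByMask(long[] regs, long mask, long[] saveArea) {
        int slot = 0;
        for (int i = 0; i < regs.length; i++) {
            if ((mask & (1L << i)) != 0) {
                saveArea[slot++] = regs[i];
            }
        }
        return slot;   // how many registers were actually saved
    }
}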
. whatever's right,
depends also on how much info is sharable .
. the caller of a module cannot know
what the module's reg' needs are,
so it must assume the worst case:
all used regs must be saved .
. within a local subsystem, though,
the sub's function type can indicate which reg's it's using;
this is a good idea in a modular system .
. perhaps there can be codes for
2 styles of calls in {modules, subordinates} .
Labels:
addm
literateprograms.org
11.5: news.addx/literateprograms.org:
. if you like eiffel, why not add some code to en.literateprograms.org?
in fact, there is not much eiffel code there,
but there is plenty of c code !!!
and java too .
. the clean way to find what is getting attention
is the portable.category .
. interesting to find out that bignum multiplication is best done by fft .
Labels:
adda,
addm,
algorithms,
literate prog'ing,
openware