Showing posts with label Promise. Show all posts

2012-11-15

task.type's syntax and semantics

adda/co:
8.16: syntax:
. when an object is given a message,
it is the object's type mgt that applies a method;
and, these type mgt's are tasks (co.programs);
but how do we get the obj's themselves
to be tasks? (ie, running code on a separate thread).

2012-08-18

one call for trees of operations

7.22: addm/one call for trees of operations:

. the key to efficiency will be
providing support for integrated use of
both oop and concrete types;
ie, if the tree needs to be done fast
or in a tight space,
then the compiler uses native types;
but if it needs to handle polymorphism,
then it uses the type-tags,
and sends it to an oop type .

biop* oop:
*: (binary operations)
. the problem with the popular oop model
is that it is not helping much for biop's;
so, we should consider the pair of operands
to be one obj;
and, we again have oop working .
[8.13:
. this has always been implicit in my design;
biop oop works by featuring
2 type tags per arg: (supertype, subtype);
eg, (number, integer);
and then we don't have to worry about
where to send a (float, integer)
because they both have
the same supertype: number .
. this note was just pointing out
that I was realizing the syntax x`f -- vs f(x) --
was ok; whereas, previously
I had howled that oop was absurd because
it turned (x * y) into x`*(y)
as shorthand for asking x's type mgt
to apply the *-operation to (x,y);
what oop needs to be doing is (x,y)`*
as a shorthand for calling the type mgt that is
the nearest supertype shared by both x and y,
and asking it to apply the *-operation to (x,y).
8.15: and of course,
oop langs like c++ need to get their own lang
and stop trying to fit within C,
so then we can go back to (x*y)
as a way to write (x,y)`* .]
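. a sketch of that (x,y)`* dispatch in Python (Number, Integer, Float, and the helper names are stand-ins of mine, not adda syntax): the pair is sent to the type mgt of the nearest supertype shared by both args .

```python
class Number:
    """the shared supertype's type mgt: applies '*' to any pair of Numbers."""
    @staticmethod
    def mul(x, y):
        return x.value * y.value

class Integer(Number):
    def __init__(self, value):
        self.value = value

class Float(Number):
    def __init__(self, value):
        self.value = value

def shared_supertype(x, y):
    # nearest class appearing in both arguments' inheritance chains
    for cls in type(x).__mro__:
        if cls in type(y).__mro__:
            return cls
    return object

def biop_mul(x, y):
    # (x,y)`* : ask the shared supertype's type mgt to apply '*' to (x,y)
    mgt = shared_supertype(x, y)
    return mgt.mul(x, y)

print(biop_mul(Float(2.5), Integer(4)))  # dispatches via Number -> 10.0
```

. so a (float, integer) pair never raises the "where to send it" question: both args share the supertype Number, and Number's mgt gets the call .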

2012-07-02

asynchronous communication and promises

6.15: adda/co/asynchronous communication and promises:

. the idea of protected types vs task types
is about whether the interaction is using an entry queue .
[6.17:
. this reminded me of what promises are:
task types are taking orders on paper,
ie, instead of being called directly,
the callers put their call details in a queue
just like a consumer-waiter-cook system .
. the ada task is designed to make the consumer wait
(ie, do nothing until their order is done);
but this doesn't have to be the case;
because, instead of blocking the caller's entire process
we could put a block on some of its variables instead;
eg, say y`= f(x) is a task call,
so the caller must wait because
the rest of its program depends on y's value;
and, y's value depends on that task call, f, finishing .
. however, we could block access to y,
such that the caller can do other things
up until trying to access y .
]
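. a sketch of that idea using Python's concurrent.futures, where the Future plays the role of the promised var y: the task call returns at once, and only the access to y blocks .

```python
from concurrent.futures import ThreadPoolExecutor
import time

def f(x):
    time.sleep(0.1)          # a slow task call
    return x * 2

with ThreadPoolExecutor() as pool:
    y = pool.submit(f, 21)   # y`= f(x): y is a promise, not yet a value
    # ... the caller is free to do other things here ...
    other_work = sum(range(10))
    # only now, on trying to access y, does the caller wait:
    print(y.result())        # -> 42
```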
. notice in ada's style
both protected and tasking types are blocking;
ie, synchronously communicating .
. how do we ask instead for
asynchronous communication ?

we don't need new syntax because
functions logically require blocking
whereas message-sends do not .
. by logically I mean
the function code is saying:
"( I am waiting for this return value), ...
[6.17: not true:
. asynchronous behaviour should be encoded in syntax
because it does make a difference at a high level:
. the only reason to feature asynchronous calling
is when the job or its shipping could take a long time;
if the job is quick
then there's no use bothering with multitasking,
and conversely, if the job is slow,
then multitasking is practically mandatory;
and, multitasking can be changing the caller's algorithm .
]
. there are 2 ways to ask for asynch:
#  pass the address of a promise-kept flag
that the caller can check to unblock a promised var;
# caller passes an email address
for sending a promise-kept notification:
the caller selects an event code
to represent a call being finished,
and that event code gets emailed to caller
when job is done .
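. both protocols can be sketched with Python threads (the worker names and the event code are illustrative, not adda syntax): a threading.Event serves as the promise-kept flag, and a queue.Queue serves as the caller's mailbox .

```python
import threading, queue

def worker_flag(result, done):
    result['y'] = 6 * 7
    done.set()                      # raise the promise-kept flag

def worker_event(mailbox, event_code):
    y = 6 * 7
    mailbox.put((event_code, y))    # email the promise-kept notification

# style 1: pass the address of a promise-kept flag
done, result = threading.Event(), {}
threading.Thread(target=worker_flag, args=(result, done)).start()
done.wait()                         # caller checks the flag to unblock
print(result['y'])                  # -> 42

# style 2: caller passes a mailbox and selects an event code
mailbox = queue.Queue()
threading.Thread(target=worker_event, args=(mailbox, 'JOB_1_DONE')).start()
code, y = mailbox.get()             # caller learns which job finished
print(code, y)                      # -> JOB_1_DONE 42
```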

. the usual protocol is to
have the called task email back to the caller
a pointer to the var that has been updated,
so if there are multiple targets,
the async'ly called task might send an email for each completion,
or they could agree that when all is done
to email the name of the async sub that has finished;
but if the caller wants to know which job was done,
then only the event-code idea indicates
a per-call completion event .

6.26:  news.adda/co/perfbook:
Is Parallel Programming Hard, And, If So, What Can You Do About It?
Paul E. McKenney, December 16, 2011
Linux Technology Center, IBM Beaverton
paulmck@linux.vnet.ibm.com
2011 version,
seen from here: cpu-and-gpu-trends-over-time
(via JeanBaptiste Poullet @jpoullet).

2010-08-30

promise pipelining

8.21: news.adda/co/promises/wiki brings understanding:
. yahoo!, this wiki page finally made sense of promises
as exemplified by e-lang's tutorial
which graphically showed things incorrectly;
so that unless you ignored the diagram
you couldn't possibly make sense of the tutorial .
[8.30: ### the following is just my
version of that page, not a working tutorial ###]

t1 := x`a();
t2 := y`b();
t3 := t1`c(t2);
. "( x`a() ) means to send the message a()
asynchronously to x.
If x, y, t1, and t2
are all located on the same remote machine,
a pipelined implementation can compute t3 with
one round-trip instead of three.
[. the original diagram showed all involved objects
existing on the client's (caller's) node,
not the remote server's;
so, you'd have to be left wondering
how is the claimed pipelining
possible for t1`c(t2)
if the temp's t1, and t2
are back at the caller's?! ]
Because all three messages are destined for
objects which are on the same remote machine,
only one request need be sent
and only one response
need be received containing the result.
. the actual message looks like:
do (remote`x`a) and save as t1;
do (remote`y`b) and save as t2;
do (t1`c(t2)) using previous saves;
and send it back .
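. a toy sketch of that batching in Python (RemoteMachine, the script format, and the lambdas are mine, for illustration): the whole dependency tree travels in one request, the saves (t1, t2) stay server-side, and one response carries the result .

```python
class RemoteMachine:
    """toy server holding objects x and y."""
    def __init__(self):
        self.objects = {'x': 10, 'y': 3}

    def run_batch(self, script):
        # one request carries the whole script; each step's result
        # is saved under its target name for later steps to use.
        saves = {}
        for target, op, args in script:
            vals = [saves.get(a, self.objects.get(a)) for a in args]
            saves[target] = op(*vals)
        return saves[target]          # one response: the final value

remote = RemoteMachine()
script = [
    ('t1', lambda x: x + 1,    ['x']),         # do (remote`x`a), save as t1
    ('t2', lambda y: y * 2,    ['y']),         # do (remote`y`b), save as t2
    ('t3', lambda a, b: a - b, ['t1', 't2']),  # do (t1`c(t2)) using saves
]
print(remote.run_batch(script))       # -> 5
```

. without pipelining, the client would need a round-trip each for t1 and t2 before it could even send the t1`c(t2) request .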
Promise pipelining should be distinguished from
parallel asynchronous message passing.
In a system supporting parallel message passing
but not pipelining,
the messages x`a() and y`b()
in the above example could proceed in parallel,
but the send of t1`c(t2) would have to wait until
both t1 and t2 had been received,
even when x, y, t1, and t2 are on the same remote machine.
. Promise pipelining vs
pipelined message processing:
. in Actor systems,
it is possible for an actor to begin processing a message
before having completed
processing of the previous message.
[. this is the usual behavior for Ada tasks;
tasks are very general, and the designer of one
can make a task that does nothing more than
collect and sort all the messages that get queued;
and then even when it accepts a job,
it can subsequently requeue it .]
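. a minimal sketch of such a task in Python, using a queue.Queue as the entry queue (the sort-by-priority policy is just one example of what a task's designer could do):

```python
import queue

def sorter_task(inbox):
    accepted = []
    while True:
        msg = inbox.get()
        if msg is None:               # shutdown sentinel
            break
        accepted.append(msg)          # accept the queued job ...
        accepted.sort()               # ... and keep jobs ordered by priority
    # having accepted a job, the task could also requeue it elsewhere;
    # here it just reports them in priority order:
    return [job for _, job in accepted]

inbox = queue.Queue()
for msg in [(2, 'compile'), (1, 'parse'), (3, 'link')]:
    inbox.put(msg)
inbox.put(None)
order = sorter_task(inbox)
print(order)                          # -> ['parse', 'compile', 'link']
```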