
2012-11-17

concurrency both expressed and automated

8.23: adda/co/finding inherent concurrency automatically:
. how can we analyze a program in order to find
what parts of it can be done concurrently?
. at each step there are implicit inputs and outputs:
for step#1, do I know its outputs?
for step#2, do I know its inputs?
(also, when step#1 makes calls,
do any of those calls modify any space that might be
needed by anything step#2 calls? ).
. if step#2's inputs don't intersect step#1's outputs,
then you can do both at the same time .
. to know these things, calls have to be
functional, so that each call's use of program space is obvious .
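
. a minimal sketch of that intersection test in go.lang terms,
assuming each step can report hypothetical Reads/Writes sets
(the names Step and coRunnable are mine, not part of adda):

```go
package main

import "fmt"

type Step struct {
	Name   string
	Reads  map[string]bool // the step's implicit inputs
	Writes map[string]bool // the step's implicit outputs
}

// coRunnable reports whether two steps may run concurrently:
// neither step's writes may touch the other's reads or writes .
func coRunnable(a, b Step) bool {
	for v := range a.Writes {
		if b.Reads[v] || b.Writes[v] {
			return false
		}
	}
	for v := range b.Writes {
		if a.Reads[v] {
			return false
		}
	}
	return true
}

func main() {
	step1 := Step{"step#1", map[string]bool{"x": true}, map[string]bool{"y": true}}
	step2 := Step{"step#2", map[string]bool{"y": true}, map[string]bool{"z": true}}
	// false: step#2's inputs intersect step#1's outputs (both use y)
	fmt.Println(coRunnable(step1, step2))
}
```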

. when calls have multiple parameters,
the order of their evaluation is unspecified;
which should be tantamount to saying
that parameters in the same tuple
can be run concurrently (co.runnable) .
. this also applies to all tuples that are at
the same level of an expression tree:
eg, in h( f(a,b), g(c,d) ), a,b,c,d are co.runnable .
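
. a minimal go.lang sketch of that co.runnable tree,
assuming f, g, h are pure; the names mirror the example above:

```go
package main

import (
	"fmt"
	"sync"
)

func f(a, b int) int { return a + b }
func g(c, d int) int { return c * d }
func h(x, y int) int { return x - y }

func main() {
	var x, y int
	var wg sync.WaitGroup
	wg.Add(2)
	// f(a,b) and g(c,d) sit at the same level of the tree,
	// so they are co.runnable:
	go func() { defer wg.Done(); x = f(1, 2) }()
	go func() { defer wg.Done(); y = g(3, 4) }()
	wg.Wait() // h needs both results, so it must wait
	fmt.Println(h(x, y))
}
```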

. a low-level way to compare
subprogramming with coprogramming
is that, while any call could be seen as
a new {process, cpu} activation,
in subprogramming, after making a call,
the caller agrees to stop working until that call is done .

. often we can find that a group of operations
are all handled by the same type mgt [11.17:
-- in fact, that is the definition of "(operation):
a subprogram that is outputting
the same type that it's inputting -- ]
so even though they are nested,
you can send the whole thing in one call;
and, in that case,
the type mgt will know what's co.runnable .
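
. a minimal go.lang sketch, assuming a hypothetical integer type mgt
that is sent a whole expression tree in one call
and decides for itself what is co.runnable:

```go
package main

import (
	"fmt"
	"sync"
)

type Expr struct {
	Op          string // "+", "*", or "" for a leaf
	Left, Right *Expr
	Val         int
}

// Eval is the one call into the type mgt;
// sibling subtrees never share state, so it runs them concurrently .
func Eval(e *Expr) int {
	if e.Op == "" {
		return e.Val
	}
	var l, r int
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); l = Eval(e.Left) }()
	go func() { defer wg.Done(); r = Eval(e.Right) }()
	wg.Wait()
	if e.Op == "+" {
		return l + r
	}
	return l * r
}

func main() {
	// (1+2) * (3+4): the two sums are co.runnable
	tree := &Expr{"*",
		&Expr{"+", &Expr{Val: 1}, &Expr{Val: 2}, 0},
		&Expr{"+", &Expr{Val: 3}, &Expr{Val: 4}, 0}, 0}
	fmt.Println(Eval(tree)) // 21
}
```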

8.23: todo.adda/co/review parasail's design:
. the parasail lang was able to parallelize a lot implicitly .
(see also comments of this page).

adda/co/explicit concurrency control:

. the abstraction to use for explicit concurrency control
is having a particular cpu called like a function:
cpu#n(subprogram, inputs, outputs)
-- the inputs are a copy of everything sent to it .
-- outputs are addresses to put results in .
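
. a minimal go.lang sketch of the cpu#n(subprogram, inputs, outputs)
abstraction, assuming each cpu is modeled by a worker goroutine
with a job queue (Job and cpu are hypothetical names):

```go
package main

import "fmt"

type Job struct {
	Subprogram func(int) int
	Input      int  // copied, so the cpu never shares the caller's space
	Output     *int // an address to put the result in
	Done       chan struct{}
}

// cpu serves one job queue, standing in for a particular cpu#n .
func cpu(jobs <-chan Job) {
	for j := range jobs {
		*j.Output = j.Subprogram(j.Input)
		close(j.Done)
	}
}

func main() {
	cpu1 := make(chan Job)
	go cpu(cpu1) // cpu#1
	var y int
	done := make(chan struct{})
	cpu1 <- Job{func(x int) int { return x * x }, 7, &y, done}
	<-done
	fmt.Println(y) // 49
}
```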

. how is go.lang outputting?
when go.lang says go f(x),
they expect f to either call an outputter
or have x be an outputting channel .
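
. for reference, this is how an asynchronous call
usually outputs in real go.lang, with x as an outputting channel:

```go
package main

import "fmt"

func f(out chan<- int) {
	out <- 42 // f "outputs" by sending on the channel
}

func main() {
	x := make(chan int)
	go f(x)          // go f(x): launch f asynchronously
	fmt.Println(<-x) // the caller blocks only when it needs the value
}
```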

. instead of saying cpu#n(...)
just say co f(x, out), or y`= co f(x) .

8.24: adda/co/syntax:
. there are 2 forms of the function co,
according to whether co is synchronous or asynch:
# co(f,g,h) -- co with multiple arguments --
spawns all of them and then waits for all to finish .
# co f -- co with just one argument --
just launches f asynchronously .
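
. a minimal go.lang sketch of both forms,
assuming co can be modeled as a variadic fork-join function:

```go
package main

import (
	"fmt"
	"sync"
)

// co with multiple arguments: spawn all, then wait for all .
func co(fns ...func()) {
	var wg sync.WaitGroup
	for _, fn := range fns {
		wg.Add(1)
		go func(fn func()) { defer wg.Done(); fn() }(fn)
	}
	wg.Wait()
}

func main() {
	f := func() { fmt.Println("f") }
	g := func() { fmt.Println("g") }
	h := func() { fmt.Println("h") }
	co(f, g, h) // synchronous form: blocks until f, g, h are done
	go f()      // asynchronous form: "co f" just launches f
	// (main may exit before the last f prints; the launch itself is the point)
}
```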

8.31: co.self/dream/adda/co:
. the dream was something about
"(ways of doing backup for concurrency or ...)
"( ... and now that we can do that,
we can do this ...) [dream recall is too fuzzy].


2012-11-15

task.type's syntax and semantics

adda/co:
8.16: syntax:
. when an object is given a message,
it is the object's type mgt that applies a method;
and, these type mgt's are tasks (co.programs);
but how do we get the obj's themselves
to be tasks? (ie, running code on a separate thread).
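
. a minimal go.lang sketch of an obj that is itself a task,
assuming the obj's state lives in its own goroutine
and messages arrive on a channel (an active-object pattern;
newCounter and msg are hypothetical names):

```go
package main

import "fmt"

type msg struct {
	delta int
	reply chan int
}

// newCounter starts the object's own thread and returns its mailbox .
func newCounter() chan<- msg {
	mailbox := make(chan msg)
	go func() {
		state := 0 // only this goroutine ever touches the state
		for m := range mailbox {
			state += m.delta
			m.reply <- state
		}
	}()
	return mailbox
}

func main() {
	counter := newCounter()
	reply := make(chan int)
	counter <- msg{5, reply} // give the obj a message
	fmt.Println(<-reply)     // 5
}
```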

2012-08-18

one call for trees of operations

7.22: addm/one call for trees of operations:

. the key to efficiency will be
providing support for integrated use of
both oop and concrete types;
ie, if the tree needs to be done fast
or in a tight space,
then the compiler uses native types;
but if it needs to handle polymorphism,
then it uses the type-tags,
and sends it to an oop type .
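
. a minimal go.lang sketch of that split, assuming a hypothetical
type-tagged value for the polymorphic path
and a plain native int for the fast path:

```go
package main

import "fmt"

// polymorphic path: a type-tagged value sent to an oop-style handler .
type Tagged struct {
	Tag string // eg, "integer", "float"
	Val any
}

// addTagged stands in for dispatching on the tag to a type mgt;
// this sketch only handles the integer case .
func addTagged(x, y Tagged) Tagged {
	return Tagged{"integer", x.Val.(int) + y.Val.(int)}
}

// native path: the compiler knows the concrete type, no tags needed .
func addNative(x, y int) int { return x + y }

func main() {
	fmt.Println(addNative(1, 2)) // fast and tight
	fmt.Println(addTagged(Tagged{"integer", 1}, Tagged{"integer", 2}))
}
```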

biop* oop:
*: (binary operations)
. the problem with the popular oop model
is that it is not helping much for biop's;
so, we should consider the binary parameter
to be one obj;
and, we again have oop working .
[8.13:
. this has always been implicit in my design;
biop oop works by featuring
2 type tags per arg: (supertype, subtype);
eg, (number, integer);
and then we don't have to worry about
where to send a (float, integer)
because they both have
the same supertype: number .
. this note was just pointing out
that I was realizing the syntax x`f -- vs f(x) --
was ok; whereas, previously
I had howled that oop was absurd because
it turned (x * y) into x`*(y)
as shorthand for asking x's type mgt
to apply the *-operation to (x,y);
what oop needs to be doing is (x,y)`*
as a shorthand for calling the type mgt that is
the nearest supertype shared by both x and y,
and asking it to apply the *-operation to (x,y).
8.15: and of course,
oop langs like c++ need to get their own lang
and stop trying to fit within C,
so then we can go back to (x*y)
as a way to write (x,y)`* .]
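
. a minimal go.lang sketch of that biop dispatch,
assuming hypothetical (supertype, subtype) tags
and a number type mgt that owns the *-operation:

```go
package main

import "fmt"

type Tag struct{ Super, Sub string }

type Obj struct {
	Tag Tag
	Val float64
}

// mul is the number type mgt's *-operation, applied to the pair (x,y);
// the pair is routed by the nearest shared supertype .
func mul(x, y Obj) Obj {
	if x.Tag.Super != y.Tag.Super {
		panic("no shared supertype") // would need coercion first
	}
	sub := "float"
	if x.Tag.Sub == "integer" && y.Tag.Sub == "integer" {
		sub = "integer" // the result stays integral
	}
	return Obj{Tag{x.Tag.Super, sub}, x.Val * y.Val}
}

func main() {
	x := Obj{Tag{"number", "float"}, 2.5}
	y := Obj{Tag{"number", "integer"}, 4}
	// (float, integer) is no worry: both share the supertype number
	fmt.Println(mul(x, y))
}
```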

2012-07-02

asynchronous communication and promises

6.15: adda/co/asynchronous communication and promises:

. the idea of protected types vs task types
is about whether the interaction is using an entry queue .
[6.17:
. this reminded me of what promises are:
task types are taking orders on paper,
ie, instead of being called directly,
the callers put their call details in a queue
just like a consumer-waiter-cook system .
. the ada task is designed to make the consumer wait
(ie, do nothing until their order is done);
but this doesn't have to be the case;
because, instead of blocking the caller's entire process
we could put a block on some of its variables instead;
eg, say y`= f(x) is a task call,
so the caller must wait because
the rest of its program depends on y's value;
and, y's value depends on that task call, f, finishing .
. however, we could block access to y,
such that the caller can do other things
up until trying to access y .
]
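
. a minimal go.lang sketch of blocking on y instead of the caller,
assuming the task call returns y as a one-shot channel (a future):

```go
package main

import (
	"fmt"
	"time"
)

// coCall runs f(x) as a task and returns y as a one-shot channel .
func coCall(f func(int) int, x int) <-chan int {
	y := make(chan int, 1)
	go func() { y <- f(x) }()
	return y
}

func main() {
	f := func(x int) int { time.Sleep(time.Second); return x * 2 }
	y := coCall(f, 21)              // y`= f(x) as a task call
	fmt.Println("doing other work") // the caller is not blocked ...
	fmt.Println(<-y)                // ... until it tries to access y
}
```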
. notice in ada's style
both protected and tasking types are blocking;
ie, synchronously communicating .
. how do we ask instead for
asynchronous communication ?

we don't need new syntax because
functions logically require blocking
whereas message-sends do not .
. by logically I mean
the function code is saying:
"( I am waiting for this return value), ...
[6.17: not true:
. asynchronous behaviour should be encoded in syntax
because it does make a difference at a high level:
. the only reason to feature asynchronous calling
is when the job or its shipping could take a long time;
if the job is quick
then there's no use bothering with multitasking,
and conversely, if the job is slow,
then multitasking is practically mandatory;
and, multitasking can change the caller's algorithm .
]
. there are 2 ways to ask for asynch:
#  pass the address of a promise-kept flag
that the caller can check to unblock a promised var;
# caller passes an email address
for sending a promise-kept notification:
the caller selects an event code
to represent a call being finished,
and that event code gets emailed to caller
when job is done .
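
. a minimal go.lang sketch of both ways,
assuming a promise-kept flag for way #1
and a channel standing in for the email address in way #2
(the event code is hypothetical):

```go
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
)

func main() {
	// way #1: a promise-kept flag whose address the caller holds;
	// y stays "blocked" until the flag says the promise was kept .
	var kept atomic.Bool
	var y int
	go func() { y = 6 * 7; kept.Store(true) }()

	// way #2: an "email address" (here, a channel) where the caller
	// receives the event code it selected for this call .
	const jobDone = 17 // event code chosen by the caller
	events := make(chan int, 1)
	go func() { events <- jobDone }()

	for !kept.Load() { // check the flag to unblock the promised var
		runtime.Gosched()
	}
	fmt.Println(y, <-events) // 42 17
}
```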

. the usual protocol is to
have the called task email back to the caller
a pointer to the var that has been updated;
so, if there are multiple targets,
the async'ly called task might send an email for each completion,
or they could agree that, when all is done,
it emails the name of the async sub that has finished;
but if the caller wants to know which job was done,
then only the event-code idea indicates
a per-call completion event .

6.26:  news.adda/co/perfbook:
Is Parallel Programming Hard, And, If So, What Can You Do About It?
Paul E. McKenney, December 16, 2011
Linux Technology Center, IBM Beaverton
paulmck@linux.vnet.ibm.com
2011 version:
src:
--
seen from here: cpu-and-gpu-trends-over-time
from: JeanBaptiste Poullet @jpoullet