
2013-12-25

Parython is a #ParaSail version based on #Python

11.22: news.adda/co/Parython is a ParaSail version based on Python:
. a recent ParaSail article mentions Parython (sources);
was there more on that earlier? yes, but just a mention:
Achieving parallelism and safety at the same time
by eliminating rather than adding features
has worked out better than we originally expected.
One of the lessons of the process has been that
just a small number of key ideas are sufficient to
achieve safe and easy parallelism.
Probably the biggest is the elimination of pointers,
with the substitution of expandable objects.
The other big one is the elimination of
direct access to global variables,
instead requiring that a variable be
passed as a (var or in out) parameter
if a function is going to update it.
Other important ones include
the elimination of parameter aliasing,
and the replacement of exception handling
with a combination of more
pre-condition checking at compile time
and a more task-friendly
event handling mechanism at run time.
So the question now is whether
some of these same key ideas
can be applied to existing languages,
to produce something with much the same look and feel
of the original, while moving toward a much more
parallel-, multicore-, human-friendly semantics.
news.adda/co/ParaSail at OOPSLA:
. ParaSail was featured at a recent OOPSLA:
# bringing value semantics to parallel OOP;
# a parallel tutorial with
decomposition and work-stealing .

2013-03-09

exceptions ok unless requirements preclude

1.30: adda/cstr/exceptions/ok unless requirements preclude them:
. I thought ParaSail's author explained somewhere
how exceptions really mess up multithreading;
review my blog entry on that ...
(parasail-is-big-win-for-reliable).
. I decided a thread hang was no big deal;
in critical applications
exceptions are absolutely useless;
but if our point is to encourage programming,
we should cater to all styles of thinking .
. we just need to protect the coder's user too:

2013-01-31

virtual C

1.11: adda/dstr/safe pointers/virtual C:
. adda's pointers can feature arithmetic exactly like C's;
yet they can still remain safe because
the addresses are not absolute:
the pointers are actually just offsets .
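
. to make that concrete, here's a minimal Go sketch
(the Arena and Ptr names are my own, not adda's):
a "pointer" is just an offset into an arena,
so arithmetic works as in C
but every dereference can be bounds-checked:

```go
package main

import "fmt"

// Ptr is an offset into an arena, not an absolute address,
// so arithmetic is allowed but every dereference is bounds-checked.
type Ptr int

type Arena struct{ mem []byte }

func (a *Arena) Deref(p Ptr) byte {
	if p < 0 || int(p) >= len(a.mem) {
		panic("pointer out of range")
	}
	return a.mem[p]
}

func main() {
	a := Arena{mem: []byte("hello")}
	p := Ptr(0)
	fmt.Printf("%c\n", a.Deref(p+4)) // pointer arithmetic, still safe: prints "o"
}
```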

2012-11-17

concurrency both expressed and automated

8.23: adda/co/finding inherent concurrency automatically:
. how can we analyze a program in order to find
which parts of it can be done concurrently?
. at each step there are implicit inputs and outputs:
for step#1, do I know its outputs?
for step#2, do I know its inputs?
(also, when step#1 makes calls,
do any of those calls modify any space that might be
needed by anything step#2 calls? ).
. if step#2's inputs don't intersect step#1's outputs
(and, symmetrically, step#1's inputs don't intersect step#2's outputs,
nor the two output sets each other),
then you can do both at the same time .
. to know these things, calls have to be
functional, so that program space is obvious .
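
. a hedged Go sketch of that check, assuming each step's
read and write sets are already declared rather than inferred:

```go
package main

import "fmt"

// Step records the variables a step reads (its inputs)
// and writes (its outputs).
type Step struct {
	reads, writes map[string]bool
}

// coRunnable reports whether two steps may run at the same time:
// neither step's writes may overlap the other's reads or writes.
func coRunnable(s1, s2 Step) bool {
	for v := range s1.writes {
		if s2.reads[v] || s2.writes[v] {
			return false
		}
	}
	for v := range s2.writes {
		if s1.reads[v] {
			return false
		}
	}
	return true
}

func set(names ...string) map[string]bool {
	m := map[string]bool{}
	for _, n := range names {
		m[n] = true
	}
	return m
}

func main() {
	step1 := Step{reads: set("a"), writes: set("x")}
	step2 := Step{reads: set("b"), writes: set("y")}
	step3 := Step{reads: set("x"), writes: set("z")}
	fmt.Println(coRunnable(step1, step2)) // true: disjoint spaces
	fmt.Println(coRunnable(step1, step3)) // false: step3 reads step1's output
}
```

. a real analysis would have to infer these sets
from the whole call graph, as noted above .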

. when calls have multiple parameters,
the order of their evaluation is unspecified;
which is tantamount to saying
that parameters in the same tuple
can be run concurrently (co.runnable) .
. this also applies to all tuples that are at
the same level of an expression tree:
eg, in h( f(a,b), g(c,d) ), a,b,c,d are co.runnable .
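
. for instance, a Go sketch of evaluating those co.runnable siblings
with goroutines (the eval helper is illustrative, not adda syntax):

```go
package main

import (
	"fmt"
	"sync"
)

// eval runs each argument computation in its own goroutine;
// arguments in the same tuple have no defined order,
// so they are treated as co.runnable.
func eval(args ...func() int) []int {
	out := make([]int, len(args))
	var wg sync.WaitGroup
	for i, a := range args {
		wg.Add(1)
		go func(i int, a func() int) {
			defer wg.Done()
			out[i] = a()
		}(i, a)
	}
	wg.Wait()
	return out
}

func main() {
	f := func(a, b int) int { return a + b }
	g := func(c, d int) int { return c * d }
	h := func(x, y int) int { return x - y }

	// a, b, c, d are the co.runnable leaves of h( f(a,b), g(c,d) ).
	leaves := eval(
		func() int { return 1 }, // a
		func() int { return 2 }, // b
		func() int { return 3 }, // c
		func() int { return 4 }, // d
	)
	// f and g are themselves co.runnable siblings.
	inner := eval(
		func() int { return f(leaves[0], leaves[1]) },
		func() int { return g(leaves[2], leaves[3]) },
	)
	fmt.Println(h(inner[0], inner[1])) // (1+2) - (3*4) = -9
}
```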

. a low-level way to compare
subprogramming with coprogramming
is that, while any call could be seen as
a new {process, cpu} activation,
in subprogramming, after making a call,
the caller agrees to stop working until that call is done .

. often we can find that a group of operations
are all handled by the same type mgt, [11.17:
-- in fact, that is the definition of "(operation):
a subprogram whose output type
is the same as its input type -- ]
so even though they are nested,
you can send the whole thing in one call,
and, in that case,
the type mgt will know what's co.runnable .
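
. a hedged Go sketch of that: a toy type mgt whose single entry point
takes the whole expression tree and runs independent subtrees
concurrently (Expr and Eval are illustrative names):

```go
package main

import (
	"fmt"
	"sync"
)

// Expr is an expression tree over a single type (int here),
// handed to the type manager whole rather than call by call.
type Expr struct {
	op   string // "lit", "+", "*"
	val  int
	l, r *Expr
}

// Eval is the type manager's one entry point: because it sees
// the whole tree, it knows the two subtrees are co.runnable.
func Eval(e *Expr) int {
	if e.op == "lit" {
		return e.val
	}
	var lv, rv int
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); lv = Eval(e.l) }()
	go func() { defer wg.Done(); rv = Eval(e.r) }()
	wg.Wait()
	if e.op == "+" {
		return lv + rv
	}
	return lv * rv
}

func main() {
	lit := func(v int) *Expr { return &Expr{op: "lit", val: v} }
	// (1+2) * (3+4), sent to the type manager in one call
	e := &Expr{op: "*",
		l: &Expr{op: "+", l: lit(1), r: lit(2)},
		r: &Expr{op: "+", l: lit(3), r: lit(4)}}
	fmt.Println(Eval(e)) // 21
}
```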

8.23: todo.adda/co/review ParaSail's design:
. the ParaSail lang was able to parallelize a lot implicitly .
(see also comments of this page).

adda/co/explicit concurrency control:

. the abstraction to use for explicit concurrency control
is having a particular cpu called like a function:
cpu#n(subprogram, inputs, outputs)
-- the inputs are a copy of everything sent to it .
-- the outputs are addresses to put results in .
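
. a Go sketch of that abstraction, assuming one job queue per cpu
(Job and its fields are illustrative names, not adda's):

```go
package main

import "fmt"

// Job models cpu#n(subprogram, inputs, outputs): the input is
// copied into the job, and the output is an address to write to.
type Job struct {
	subprogram func(in int) int
	input      int       // copied in
	output     *int      // address for the result
	done       chan bool // signals completion
}

func main() {
	const nCPUs = 4
	// one queue per "cpu"; worker n drains cpu[n]
	cpu := make([]chan Job, nCPUs)
	for n := range cpu {
		cpu[n] = make(chan Job)
		go func(jobs chan Job) {
			for j := range jobs {
				*j.output = j.subprogram(j.input)
				j.done <- true
			}
		}(cpu[n])
	}

	var result int
	j := Job{
		subprogram: func(x int) int { return x * x },
		input:      7,
		output:     &result,
		done:       make(chan bool),
	}
	cpu[2] <- j // "cpu#2(square, 7, &result)"
	<-j.done
	fmt.Println(result) // 49
}
```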

. how is go.lang outputting?
when go.lang says go f(x)
they expect f to either call an outputter
or have x be an outputting channel .
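
. eg, in Go itself, the channel version looks like this:

```go
package main

import "fmt"

// f "outputs" by sending its result on the channel it was given.
func f(x int, out chan<- int) {
	out <- x * 2
}

func main() {
	out := make(chan int)
	go f(21, out)      // go f(x): launch f asynchronously
	fmt.Println(<-out) // receive f's output: 42
}
```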

. instead of saying cpu#n(...)
just say co f(x, out), or y`= co f(x) .

8.24: adda/co/syntax:
. there are 2 forms of the function co
according to whether co is synchronous or asynch:
# co(f,g,h) -- co with multiple arguments --
expects spawn and wait .
# co f -- co with just one argument --
just launches f asynchronously .
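
. modeled in Go (co and coAsync are stand-ins for the proposed syntax,
not anything adda has defined):

```go
package main

import (
	"fmt"
	"sync"
)

// co with multiple arguments: spawn all, then wait (synchronous).
func co(fns ...func()) {
	var wg sync.WaitGroup
	for _, fn := range fns {
		wg.Add(1)
		go func(fn func()) {
			defer wg.Done()
			fn()
		}(fn)
	}
	wg.Wait()
}

// co with just one argument: launch it asynchronously.
func coAsync(f func()) { go f() }

func main() {
	f := func() { fmt.Println("f") }
	g := func() { fmt.Println("g") }
	h := func() { fmt.Println("h") }

	co(f, g, h) // "co(f,g,h)": returns only after all three finish
	coAsync(f)  // "co f": returns immediately; nothing waits for it
}
```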

8.31: co.self/dream/adda/co:
. the dream was something about
"(ways of doing backup for concurrency or ...)
"( ... and now that we can do that,
we can do this ...) [dream recall is too fuzzy].


2012-08-31

ParaSail is a big win for reliable concurrency

12.7.2: news.adda/co/ParaSail
(Parallel Specification and Implementation Language) is by Ada's Tucker Taft:

first sighting of ParaSail:
11/09/11 Language Lessons: Where New
Parallel Developments Fit Into Your Toolkit

By John Moore for Intelligence In Software
The rise of multicore processors and programmable GPUs
has sparked a wave of developments in
parallel programming languages.
Developers seeking to exploit multicore and manycore systems
-- potentially thousands of processors --
now have more options at their disposal.
Parallel languages making moves of late
include SEJITS from the University of California, Berkeley;
The Khronos Group’s OpenCL;
the recently open-sourced Cilk Plus;
and the newly created ParaSail language.
Developers may encounter these languages directly,
though the wider community will most likely find them
embedded within higher-level languages.

. ParaSail incorporates formal methods such as
preconditions and postconditions,
which are enforced by the compiler.
In another nod to secure, safety-critical systems, ...
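
. Go has no compile-time contracts, but a runtime approximation
shows the shape of the idea (sqrtFloor is my example, not ParaSail's;
ParaSail would discharge these checks at compile time):

```go
package main

import "fmt"

// sqrtFloor carries the contract:
//   pre:  x >= 0
//   post: r*r <= x < (r+1)*(r+1)
// approximated here with runtime checks.
func sqrtFloor(x int) int {
	if x < 0 { // precondition
		panic("pre: x >= 0")
	}
	r := 0
	for (r+1)*(r+1) <= x {
		r++
	}
	if !(r*r <= x && x < (r+1)*(r+1)) { // postcondition
		panic("post: r is floor of sqrt(x)")
	}
	return r
}

func main() {
	fmt.Println(sqrtFloor(10)) // 3
}
```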

2012-07-05

supercomputing programming languages

6.20: web.adda/supercomputing programming languages:
personal supercomputing:
The personal supercomputing idea
has also gained momentum thanks to the emergence of
programming languages for GPGPU
(general-purpose computing on GPU's).
. Nvidia has been trying to educate programmers
and build support for CUDA,
the C language programming environment
created specifically for programming GPUs.
Meanwhile, AMD declared its support for
OpenCL (Open Computing Language) in 2009 .
[6.25: web: the C in OpenCL stands for
computing, not concurrency?

OpenCL (Open Computing Language) is the first
open, royalty-free standard for
general-purpose parallel programming of
heterogeneous systems.
OpenCL provides a uniform programming environment
for software developers to write efficient, portable code
for high-performance compute servers,
desktop computer systems and handheld devices
using a diverse mix of multi-core CPUs, GPUs,
Cell-type architectures
and other parallel processors such as DSPs.]
OpenCL is an industry standard programming language.
Nvidia says it also works with developers
to support OpenCL.
roll-your-own personal supercomputers:
. researchers at the University of Illinois were looking to
bypass the long waits for computer time at the
National Center for Supercomputing Applications;
so, they built “personal supercomputers,”
compact machines with a stack of graphics processors
that together can be used to run complex simulations.
. they have a quad-core Linux PC with 8GB of memory
and 3 GPUs (one NVIDIA Quadro FX 5800,
two NVIDIA Tesla C1060), each with 4GB of memory .
any news on DARPA's HPCS program
(High Productivity Computing Systems)?