2023.4.20: news.cyb/dev.net/WebGPU/
introducing a significant web technology:
. this is a revision of the WebGPU news from
François Beaufort and Corentin Wallez, April 6, 2023 .
. real opportunity starts with real documentation .
. Threading is for parallel IO
-- Python has that old-style OOP ... --
and multiprocessing is for parallel computation;
the GIL does not hinder any of that
... just because process creation in Windows
used to be slow as a dog, ...]
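. eg, a minimal sketch of that split
(the task functions here are made-up placeholders,
not from the quoted thread):
    import threading, multiprocessing, time

    def wait_on_io(seconds):
        # IO-bound stand-in: the GIL is released while waiting,
        # so plain threads overlap this kind of work just fine
        time.sleep(seconds)

    def crunch(n):
        # CPU-bound: a separate process sidesteps the GIL
        # and can run on another core
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        t = threading.Thread(target=wait_on_io, args=(1.0,))
        p = multiprocessing.Process(target=crunch, args=(10**7,))
        t.start(); p.start()
        t.join(); p.join()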
. see wiki.python.org/moin/Concurrency/:
you need to install your code only once;
the code and data are both auto-distributed
from the central server to the worker nodes .
Jobs are started asynchronously,
and run in parallel on an available node.
The callable object that is
returned when the job is submitted
blocks until the response is ready,
so response sets can be computed
asynchronously, then merged synchronously.
Load distribution is transparent,
making it excellent for clustered environments.
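. eg, a minimal sketch of that submit-then-block pattern,
assuming the classic pp (Parallel Python) API;
the worker function and chunk sizes are arbitrary:
    import pp

    def partial_sum(lo, hi):
        return sum(i * i for i in range(lo, hi))

    # local server; passing ppservers=("node1:60000",) would also
    # push jobs out to remote worker nodes
    job_server = pp.Server()
    jobs = [job_server.submit(partial_sum, (i, i + 100_000))
            for i in range(0, 1_000_000, 100_000)]
    # each job object is a callable that blocks until its result arrives,
    # so the merge below is synchronous
    total = sum(job() for job in jobs)
    print(total)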
Whereas Parallel Python is designed around
a “push” style distribution model,
the Processing package is set up to
create producer/consumer-style systems
where worker processes pull jobs from a queue .
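. eg, a minimal sketch of that pull model,
shown with multiprocessing
(the standard-library descendant of the processing package);
worker count and job payloads are arbitrary:
    from multiprocessing import Process, Queue

    def worker(jobs, results):
        # each worker pulls jobs off the shared queue
        # until it sees the None sentinel
        for n in iter(jobs.get, None):
            results.put((n, n * n))

    if __name__ == "__main__":
        jobs, results = Queue(), Queue()
        workers = [Process(target=worker, args=(jobs, results))
                   for _ in range(4)]
        for w in workers:
            w.start()
        for n in range(20):            # producer side
            jobs.put(n)
        for _ in workers:              # one sentinel per worker
            jobs.put(None)
        print(sorted(results.get() for _ in range(20)))
        for w in workers:
            w.join()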
Since the Processing package is almost a
drop-in replacement for the
standard library’s threading module,
many of your existing multi-threaded applications
can be converted to use processes
simply by changing a few import statements .
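. eg, the import swap, sketched with today's standard-library names
(multiprocessing.Process mirrors threading.Thread):
    # before: from threading import Thread as Task
    from multiprocessing import Process as Task

    def work(n):
        print(sum(i * i for i in range(n)))

    if __name__ == "__main__":
        tasks = [Task(target=work, args=(10**6,)) for _ in range(4)]
        for t in tasks:
            t.start()
        for t in tasks:
            t.join()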
"( Computational Linguistics meanswhat does his toolkit do?
the theoretical foundations
and wonderful world of algorithms .
eg, writing small toolkits for finite state automata. )
"( Determinization of NFSAs to DFSAssalsa project page @ coli.uni-saarland.de:
Creation of DFAs from simple regular expressions
Application of FSTs
dot graph output for *FSAs
-- if you like PyParsing
and generator-expression-prone Python code,
you might want to have a look at
the TinyFST code.)
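. the TinyFST code itself isn't reproduced here;
this is just a generic sketch of the NFSA-to-DFSA step,
i.e. the textbook subset construction with epsilon closure:
    from collections import deque

    def nfa_to_dfa(nfa, start, accept):
        # nfa maps (state, symbol) -> set of states;
        # epsilon moves use the symbol "".
        # DFA states come out as frozensets of NFSA states.
        def eps_closure(states):
            stack, closure = list(states), set(states)
            while stack:
                s = stack.pop()
                for t in nfa.get((s, ""), ()):
                    if t not in closure:
                        closure.add(t)
                        stack.append(t)
            return frozenset(closure)

        symbols = {sym for (_, sym) in nfa if sym != ""}
        dfa_start = eps_closure({start})
        transitions, dfa_accept = {}, set()
        todo, seen = deque([dfa_start]), {dfa_start}
        while todo:
            current = todo.popleft()
            if current & set(accept):
                dfa_accept.add(current)
            for sym in symbols:
                target = eps_closure({t for s in current
                                      for t in nfa.get((s, sym), ())})
                transitions[(current, sym)] = target
                if target not in seen:
                    seen.add(target)
                    todo.append(target)
        return transitions, dfa_start, dfa_accept

    # example: an NFSA for (a|b)*b with one epsilon move
    nfa = {(0, "a"): {0}, (0, "b"): {0, 1}, (1, ""): {2}}
    print(nfa_to_dfa(nfa, start=0, accept={2}))
the transition table that comes back is also easy
to dump as a dot graph for visualization .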
"( One of the most urgent problems in language technologyThe TreeAligner Project
is the lexical semantics bottleneck,
the unavailability of domain-independent lexica
with rich semantic information on lexical items.
Such lexica could greatly improve
the quality of current applications.
At the same time, providing
large-scale lexical semantic information
is an enormous challenge,
due to the size of the vocabulary
and the inherent vagueness of lexical meaning.
Our aims are:
Providing a large lexical semantic lexicon
with semantic and syntactic properties for
German lexical items,
to serve as a rich static database.
Developing techniques for the wide-coverage statistics-based
semantic annotation of texts.
Investigating the use of contiguous frame annotations
for dynamic semantic analysis
in practical natural language processing applications).
"( The TreeAligner is a tool for annotating and browsing. his latest blog entry for dev is
correspondences between elements of syntactical trees.
It can be used for creating parallel treebanks.
It also includes a powerful search function.)
. his latest blog entry for dev is:
By John Moore for Intelligence In Software
The rise of multicore processors and programmable GPUs
has sparked a wave of developments in
parallel programming languages.
Developers seeking to exploit multicore and manycore systems
-- potentially thousands of processors --
now have more options at their disposal.
Parallel languages making moves of late
include SEJITS from the University of California, Berkeley;
the Khronos Group’s OpenCL;
the recently open-sourced Cilk Plus;
and the newly created ParaSail language.
Developers may encounter these languages directly,
though the wider community will most likely find them
embedded within higher-level languages.
. ParaSail incorporates formal methods such as
preconditions and postconditions,
which are enforced by the compiler.
In another nod to secure, safety-critical systems, ...
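. ParaSail's own syntax is different
and its checks happen at compile time;
this is only a Python sketch of the precondition/postcondition idea,
with runtime asserts standing in for the compiler's enforcement:
    def checked_sqrt(x):
        assert x >= 0.0, "precondition: x must be non-negative"
        result = x ** 0.5
        assert abs(result * result - x) <= 1e-9 * max(1.0, x), \
            "postcondition: result squared must give back x"
        return result

    print(checked_sqrt(2.0))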
. The personal supercomputing idea
has also gained momentum thanks to the emergence of
programming languages for GPGPU
(general-purpose computing on GPUs).
. Nvidia has been trying to educate programmers
and build support for CUDA,
the C language programming environment
created specifically for programming GPUs.
Meanwhile, AMD declared its support for
OpenCL (open computing language) in 2009 .
[6.25: web: the c in OpenCL stands for:
OpenCL (Open Computing Language) is the first
open, royalty-free standard for
general-purpose parallel programming of
heterogeneous systems.
OpenCL provides a uniform programming environment
for software developers to write efficient, portable code
for high-performance compute servers,
desktop computer systems and handheld devices
using a diverse mix of multi-core CPUs, GPUs,
Cell-type architectures
and other parallel processors such as DSPs.]
OpenCL is an industry standard programming language.
Nvidia says it also works with developers
to support OpenCL.
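. eg, a minimal sketch of what calling into OpenCL looks like from Python,
assuming the pyopencl package and a working OpenCL driver;
the kernel name and array sizes are arbitrary:
    import numpy as np
    import pyopencl as cl

    # OpenCL C kernel: one work-item per array element
    KERNEL = """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out)
    {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()     # picks whatever device the driver offers
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prog = cl.Program(ctx, KERNEL).build()
    prog.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)
the same kernel source is meant to run unchanged
on a CPU, a GPU, or another accelerator,
which is the portability point the standard is making .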
roll-your-own personal supercomputers:
. researchers at the University of Illinois were looking to
bypass the long waits for computer time at the
National Center for Supercomputing Applications;
so, they built “personal supercomputers,”
compact machines with a stack of graphics processors
that together can be used to run complex simulations.
. they have a quad-core Linux PC with 8GB of memory
and 3 GPUs (one NVIDIA Quadro FX 5800,
two NVIDIA Tesla C1060), each with 4GB .
. any news on darpa's HPCS program?