Showing posts with label GPGPU.

2023-04-20

WebGPU: a significant web technology

2023.4.20: news.cyb/dev.net/WebGPU/
introducing a significant web technology:

. this is a revision of WebGPU news from
François Beaufort and Corentin Wallez,
April 6, 2023:

https://developer.chrome.com/blog/webgpu-release/

2012-11-17

walk and chew gum #Python

8.22: news.adda/python/co/walk and chew gum:
summary:
Google discourages Python for new projects?
[they encourage go.lang;
Python is said to need more resources?]
I'm sure the problem is backwards compatibility .
. in the paper where they talked about Unladen Swallow
they mentioned wanting to remove the GIL
(the global interpreter lock for thread safety)
and this really surprised me coming from Google staff,
because Python's project lead (another Google staffer)
has argued convincingly
that due to the language itself, [11.17:
ie, by being designed for interfacing with C
(Jython and IronPython have no GIL)]
removing the GIL from CPython was impractical .
[11.17: at PyCon 2012, he adds:
Threading is for parallel IO.
Multiprocessing is for parallel computation.
The GIL does not hinder any of that.
... just because process creation in Windows
used to be slow as a dog, ...]
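[a minimal sketch of that division of labor,
using only the standard library's threading and multiprocessing
(Python 3 here); the URLs and the busy-loop are just placeholders:

import threading, multiprocessing, urllib.request

def fetch(url):
    # IO-bound: the GIL is released while waiting on the network
    urllib.request.urlopen(url).read()

def crunch(n):
    # CPU-bound: give it its own process for real parallelism
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    threads = [threading.Thread(target=fetch, args=(u,))
               for u in ("http://example.com", "http://example.org")]
    for t in threads: t.start()
    for t in threads: t.join()

    with multiprocessing.Pool() as pool:      # one worker process per core,
        print(pool.map(crunch, [10**6] * 4))  # each with its own GIL
]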
. Python has that old-style OOP,
which doesn't do much for encapsulation,
and therefore doesn't do much for thread safety
except to single-thread the interpreter .
. if you want something much like python
that also has good concurrency,
then google has given it to us in go.lang;
but, perhaps what those Google staff meant to say
is that Python is like Intel's CISC architecture:
which was fundamentally slower than RISC;
but they virtualized it:
the machine instructions are converted
to RISC microcode .
. that's what big money could do for Python:
automatically find the inherent concurrency
and translate it to a threaded virtual machine .[11.17:
... but still interface flawlessly with
all C code on all platforms?
I'm no longer confident about that .]

[11.17: web:
. to overcome the GIL limitation,
the Parallel Python SMP module
runs python code in parallel
on both multicores and multiprocessors .
Doug Hellmann (2007) reviews it:
. you need to install your code only once:
the code and data are both auto'distributed
from the central server to the worker nodes .
Jobs are started asynchronously,
and run in parallel on an available node.
The callable object that is
returned when the job is submitted
blocks until the response is ready,
so response sets can be computed
asynchronously, then merged synchronously.
Load distribution is transparent,
making it excellent for clustered environments.
Whereas Parallel Python is designed around
a “push” style distribution model,
the Processing package is set up to
create producer/consumer-style systems
where worker processes pull jobs from a queue .
Since the Processing package is almost a
drop-in replacement for the
standard library’s threading module,
many of your existing multi-threaded applications
can be converted to use processes
simply by changing a few import statements .
. see wiki.python.org/moin/Concurrency/
for the latest on Pythonic concurrency .]
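[a sketch of that import-swap idea, using the standard library's
multiprocessing (the descendant of the Processing package);
worker() is just a placeholder job:

# before: threads, serialized by the GIL for CPU work
# from threading import Thread as Worker
# after: processes, one interpreter (and GIL) each
from multiprocessing import Process as Worker

def worker(n):
    print(sum(i * i for i in range(n)))

if __name__ == "__main__":
    # Thread and Process share the target/args/start/join interface,
    # which is why the swap is only a matter of imports
    jobs = [Worker(target=worker, args=(10**6,)) for _ in range(4)]
    for j in jobs: j.start()
    for j in jobs: j.join()
]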


2012-09-22

hello to founder of diotavelli.net's PyQt wiki

7.11: web.co.cyb/dev/who does diotavelli.net?:
. the supporter of the pyqt wiki
also has a page on gpgpu .
. other details of his coding career:
studying Computational Linguistics
    2003‒2006: Tübingen (BA)
    2007‒2009: Saarbrücken (M.Sc.)
. here are his papers .

at Google ZRH:
"( Computational Linguistics means
the theoretical foundations
and wonderful world of algorithms .
eg, writing small toolkits for finite state automata. )
what does his toolkit do?
"(    Determinization of NFSAs to DFSAs
    Creation of DFAs from simple regular expressions
    Application of FSTs
    dot graph output for *FSAs
-- if you like PyParsing
and generator-expression-prone Python code,
you might want to have a look at
the TinyFST code.)
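[not his TinyFST code, just a generic sketch of the first item,
NFSA-to-DFSA determinization by subset construction;
the toy NFA at the bottom is made up for illustration:

def determinize(nfa, start, accepting):
    """nfa[state][symbol] -> set of states; returns a DFA over frozensets."""
    start_set = frozenset([start])
    dfa, todo = {}, [start_set]
    while todo:
        current = todo.pop()
        if current in dfa:
            continue
        dfa[current] = {}
        symbols = {sym for st in current for sym in nfa.get(st, {})}
        for sym in symbols:
            nxt = frozenset(s for st in current
                              for s in nfa.get(st, {}).get(sym, ()))
            dfa[current][sym] = nxt
            todo.append(nxt)
    dfa_accepting = {s for s in dfa if s & accepting}
    return dfa, start_set, dfa_accepting

# tiny example: strings over {a,b} ending in "ab"
nfa = {0: {'a': {0, 1}, 'b': {0}}, 1: {'b': {2}}}
dfa, q0, finals = determinize(nfa, 0, {2})
]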
salsa project page @ coli.uni-saarland.de:
"( One of the most urgent problems in language technology
is the lexical semantics bottleneck,
the unavailability of domain-independent lexica
with rich semantic information on lexical items.
Such lexica could greatly improve
the quality of current applications.
At the same time, providing
large-scale lexical semantic information
is an enormous challenge,
due to the size of the vocabulary
and the inherent vagueness of lexical meaning.
Our aims are:
    Providing a large lexical semantic lexicon
    providing semantic and syntactic properties for
    German lexical items,
    to serve as a rich static database.
    Developing techniques for the wide-coverage statistics-based
    semantic annotation of texts.
    Investigating the use of contiguous frame annotations
    for dynamic semantic analysis
    in practical natural language processing applications).
The TreeAligner Project
"( The TreeAligner is a tool for annotating and browsing
correspondences between elements of syntactical trees.
It can be used for creating parallel treebanks.
It also includes a powerful search function.)
. his latest blog entry for dev is
2009/py for gnome
-- I don't see any interest in pyqt at all
except for knowing about his wiki .

Scrollable Widgets with PyGTK 2009:
. the code for that project is online at
www.cl.uzh.ch/kitt ... gtktreeview.py#l199 .
-- © 2007-2009 The TreeAligner Team, at ifi.uzh.ch,
GNU GPLv2 . import gobject, gtk, cairo ...
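[not the TreeAligner's gtktreeview.py, just the usual PyGTK 2 pattern
for a scrollable widget, assuming pygtk is installed, for comparison:

import gtk

win = gtk.Window()
win.set_default_size(250, 150)
win.connect("destroy", gtk.main_quit)

store = gtk.ListStore(str)
for i in range(200):
    store.append(["row %d" % i])
view = gtk.TreeView(store)
view.append_column(
    gtk.TreeViewColumn("name", gtk.CellRendererText(), text=0))

scroller = gtk.ScrolledWindow()           # TreeView scrolls natively,
scroller.set_policy(gtk.POLICY_AUTOMATIC, # so it can go straight into
                    gtk.POLICY_AUTOMATIC) # a ScrolledWindow
scroller.add(view)

win.add(scroller)
win.show_all()
gtk.main()
]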

2012-09-21

why no open drivers for multi-core

7.13: adds/openware/
why open drivers are impractical for multi-core:

. it's ironic that the reason openware proponents
wanted to design the drivers for GPUs themselves
comes from assuming that
closed-source drivers would only be a wall
between good programmers and good hardware:
if you are a chip maker,
and you insist on providing the drivers,
the likely reason is that you are
unable to provide consistently good chips;
but what you can do is
build in more cores than you'll need,
and then have the drivers sort out the good ones,
just as a hard drive's format sorts out bad sectors .
7.16: overheating:
. if they provide more cores and all are usable,
and the user is using all the extras,
then could the unit overheat?
7.17: appearance:
. perhaps it would simplify the pricing structure
if they promised a constant number of cores
and then always delivered only that number .


2012-08-31

parasail is big win for reliable concurrency

12.7.2: news.adda/co/ParaSail
(Parallel Spec and Impl Lang) is by Ada's Tucker Taft:

first sighting of parasail:
11/09/11 Language Lessons: Where New
Parallel Developments Fit Into Your Toolkit

By John Moore for Intelligence In Software
The rise of multicore processors and programmable GPUs
has sparked a wave of developments in
parallel programming languages.
Developers seeking to exploit multicore and manycore systems
-- potentially thousands of processors --
now have more options at their disposal.
Parallel languages making moves of late
include the SEJITS of University of California, Berkeley;
The Khronos Group’s OpenCL;
the recently open-sourced Cilk Plus;
and the newly created ParaSail language.
Developers may encounter these languages directly,
though the wider community will most likely find them
embedded within higher-level languages.

. ParaSail incorporates formal methods such as
preconditions and post-conditions,
which are enforced by the compiler.
In another nod to secure, safety-critical systems,
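[ParaSail checks such contracts at compile time;
here is only a runtime approximation of the idea in Python,
with isqrt as a made-up example:

def contract(pre, post):
    # wrap f so its precondition and postcondition are checked on every call
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition failed"
            result = f(*args)
            assert post(result, *args), "postcondition failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def isqrt(x):
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))   # 3
]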

2012-07-05

supercomputing programming languages

6.20: web.adda/supercomputing programming languages:
personal supercomputing:
The personal supercomputing idea
has also gained momentum thanks to the emergence of
programming languages for GPGPU
(general-purpose computing on GPU's).
. Nvidia has been trying to educate programmers
and build support for CUDA,
the C language programming environment
created specifically for programming GPUs.
Meanwhile, AMD declared its support for
OpenCL (Open Computing Language) in 2009 .
[6.25: web: the C in OpenCL stands for
computing, not concurrency?

OpenCL (Open Computing Language) is the first
open, royalty-free standard for
general-purpose parallel programming of
heterogeneous systems.
OpenCL provides a uniform programming environment
for software developers to write efficient, portable code
for high-performance compute servers,
desktop computer systems and handheld devices
using a diverse mix of multi-core CPUs, GPUs,
Cell-type architectures
and other parallel processors such as DSPs.]
OpenCL is an industry standard programming language.
Nvidia says it also works with developers
to support OpenCL.
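[a minimal taste of OpenCL from Python, assuming the pyopencl package
and a working OpenCL driver are installed; the kernel just adds two vectors:

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(1024, dtype=np.float32)
b = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# the kernel source is plain OpenCL C; each work-item adds one element
prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(out[:4])   # [0. 2. 4. 6.]
]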
roll-your-own personal supercomputers:
. researchers at the University of Illinois were looking to
bypass the long waits for computer time at the
National Center for Supercomputing Applications;
so, they built “personal supercomputers,”
compact machines with a stack of graphics processors
that together can be used to run complex simulations.
. they have a quad-core Linux PC with 8GB of memory
and 3 GPUs (one NVIDIA Quadro FX 5800,
two NVIDIA Tesla C1060) each with 4GB .
any news on darpa's HPCS program
(High Productivity Computing Systems)?