How to send sparse vectors and matrices over ZeroMQ?
I have a matrix A (how many stories start that way?) that is sparse:

[ [0,   0, 0, 1.2, 0],
  [0,   0, 0, 0,   0],
  [3.5, 0, 0, 0,   0],
  [0,   7, 0, 0,   0] ]
I want to send variants of it back and forth between processes using ZeroMQ. Assume the client and servers are written in different languages and have no common serialization format. Here are the tasks:

- create A. This is complicated by needing to send the "frame" of the matrix, here (4,5).
- update A[4,2] from 7 to 6.
- take a sparse vector v = [0, 0, 3.1, 0, 0], multiply it by A, and send the result back.
I've been told that sending byte streams is the best solution, but I can't find examples that cross different libraries, and none in a sparse format.
My default would be a Python, C++ or Chapel pairing, if those can speak to each other.
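For concreteness, here is a minimal sketch of the kind of byte-stream exchange I have in mind, on the Python side (pyzmq and NumPy assumed; the frame layout, command names and endpoint are only illustrative assumptions, not an established format):

import struct

import numpy as np
import zmq


def coo_frames(shape, rows, cols, vals):
    """Pack a sparse matrix in COO form into language-neutral byte frames:
       frame 0: shape as two little-endian uint64
       frame 1: row indices    as little-endian uint64
       frame 2: column indices as little-endian uint64
       frame 3: values         as little-endian float64"""
    return [struct.pack('<QQ', *shape),
            np.asarray(rows, dtype='<u8').tobytes(),
            np.asarray(cols, dtype='<u8').tobytes(),
            np.asarray(vals, dtype='<f8').tobytes()]


# the 4x5 example matrix A above, nonzeros only, 0-based indices
frames = coo_frames((4, 5), rows=[0, 2, 3], cols=[3, 0, 1], vals=[1.2, 3.5, 7.0])

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://localhost:5555")        # endpoint chosen arbitrarily for the sketch
sock.send_multipart([b"CREATE"] + frames)   # ZeroMQ delivers all frames, or nothing
print(sock.recv_multipart())

An update or a multiply request could reuse the same frame layout (a single (row, col, value) triple, or the vector v packed as one values frame), so the only contract between the two languages would be the byte layout itself, not a library.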
As you know, Brian, ZeroMQ is not the problem here.
Let's try to re-wrap the problem formulation first.
Chapel features a

use ZMQ;

module. From the ZeroMQ point of view, it is not the originator's side (Python), but the target environment, Chapel (or C++, as you've mentioned above), that should decide on the best choice of serialization strategy, because the de-serialization will have to work inside that particular target-language implementation. Yes, ZeroMQ will best carry the necessary payloads, byte-by-byte, and there is nothing dangerous here, apart from the current state of the Chapel ZMQ-module issue, which is still under review. The de-serializer decides what to do once the data have come in. As has been put in many ZeroMQ answers, ZeroMQ either delivers the complete original message or none at all, so it ceases to be a dangerous no-go strategy to try to move the whole massive matrix at once. The extreme care taken in sparse-matrix tools means that there is rather a need to "communicate" a sparse-matrix (re-)representation than to "send" it as such, if not because the [SPACE] is not available on the originator's node, then because of the different representations of sparse-matrix content on each side.
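To make that receiver-side decision concrete, here is a minimal sketch of a target that unpacks the byte frames described in the question into its own native sparse type (shown in Python with scipy.sparse purely as a runnable stand-in; a Chapel or C++ target would map the same frames onto its own sparse representation; the frame layout and endpoint are the same assumptions as in the question's sketch):

import struct

import numpy as np
import zmq
from scipy.sparse import coo_matrix

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://*:5555")                   # endpoint chosen arbitrarily for the sketch

# the multipart message arrives complete, or not at all
command, header, rows, cols, vals = sock.recv_multipart()
n_rows, n_cols = struct.unpack('<QQ', header)

# the target side, not the originator, picks its native sparse representation here
A = coo_matrix((np.frombuffer(vals, dtype='<f8'),
                (np.frombuffer(rows, dtype='<u8'),
                 np.frombuffer(cols, dtype='<u8'))),
               shape=(n_rows, n_cols))

sock.send(b"OK")

The same REP endpoint could dispatch on the first frame (b"CREATE", b"UPDATE", b"MULTIPLY") and answer a multiply request with the result vector packed in the same byte layout.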
A concept of a possible solution:

This said, my choice would be to create an intelligent, distributed, agent-based system for the translation, one that allows the target environment to ask the originator's side (where the massive sparse matrix was assembled) to start a process of re-representing that massive sparse matrix onto the target environment, so that this "replication-via-smart-communicated-content" becomes ready for Chapel as a { Matrix | SparseMatrix } type of content re-representation, ready to

use LinearAlgebra;
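A minimal sketch of such a pull-style exchange, where the target asks and the originator streams the re-representation in chunks (again in Python, and only the originator-side agent; message names, chunk size and endpoint are illustrative assumptions):

import struct

import numpy as np
import zmq

CHUNK = 100_000                             # nonzeros per chunk, an arbitrary choice


def serve_matrix(shape, rows, cols, vals, endpoint="tcp://*:5556"):
    """Originator-side agent: waits for the target to ask for the matrix and
       streams it as (row, col, value) chunks, so the whole matrix never has
       to exist as one serialized blob next to the original in RAM."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)

    sock.recv()                             # b"SEND_MATRIX" from the target
    sock.send(struct.pack('<QQQ', shape[0], shape[1], len(vals)))   # shape + nnz

    for start in range(0, len(vals), CHUNK):
        sock.recv()                         # b"NEXT_CHUNK"
        stop = min(start + CHUNK, len(vals))
        sock.send_multipart([
            np.asarray(rows[start:stop], dtype='<u8').tobytes(),
            np.asarray(cols[start:stop], dtype='<u8').tobytes(),
            np.asarray(vals[start:stop], dtype='<f8').tobytes(),
        ])

The target-side agent mirrors this: it sends b"SEND_MATRIX", reads the header, keeps asking for b"NEXT_CHUNK" until it has all the nonzeros, and only then assembles its native { Matrix | SparseMatrix } content locally.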
Simply forget hunting the low-hanging fruits offered so far: the [SPACE] will kill any JSON-grown-in-size, re-wrapped original massive matrix, which will 1st hardly fit into the same node's in-RAM footprint, and will 2nd next crash any zero-copy handling (as copies of such a data-peta-blob will be attempted to be placed into the O/S & kernel network buffers etc.), so that will not fly.
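A back-of-the-envelope illustration of why, using made-up but plausible figures rather than any measurement:

# hypothetical 1e6 x 1e6 matrix with 1e7 nonzeros (illustrative assumptions only)
n_rows = n_cols = 1_000_000
nnz = 10_000_000

binary_coo = nnz * (8 + 8 + 8)      # raw (uint64 row, uint64 col, float64 value) triples
dense_json = n_rows * n_cols * 3    # roughly three text bytes per element once JSON-re-wrapped

print(f"binary COO stream : {binary_coo / 1e9:8.2f} GB")   # ~ 0.24 GB
print(f"dense JSON blob   : {dense_json / 1e12:8.2f} TB")  # ~ 3 TB, before the buffer copies

The chunked, binary re-representation sketched above stays proportional to the number of nonzeros; the JSON-re-wrapped blob grows with rows x cols and then gets copied again on its way through the network stack.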