Homepage of Michele Martone

Hi. I work as a High Performance Computing expert at the High Performance Systems Division of the LRZ.
My duties include large-scale code restructuring, OpenMP / MPI parallelization of scientific codes, and sparse matrix computations. In addition, I occasionally give training courses or talks on various topics in this domain.
I am the author of a few Free / Libre / Open Source Software (FLOSS) packages, some developed for work, some for fun.
On this page you will find some information related to my activities in the above fields.
If you are new to these topics then consult the Frequently Asked Questions section.
Talks and Publications
"Introduction to Semantic Patching of C programs with Coccinelle"
( training at LRZ, Garching, Germany, 2020.03.24; cancelled due to the COVID-19 pandemic )
"Through the Steps of Programmable Refactoring of a Large Scientific Code"
( invited talk at Verona University, Verona, Italy 2019.11.25 )
"Introduction to Semantic Patching of C programs with Coccinelle",
( training at LRZ, Garching, Germany 2019.10.08 )
"Restructuring Scientific Software using Semantic Patching with Coccinelle",
( workshop talk at de-RSE 2019, "Research Software Engineers Association Germany Conference", Potsdam, Germany, 2019.06.04 )
"Introduction to Version Control with SVN and GIT",
( training at LRZ, Garching, Germany, 2019.03.01 )
"Automating Data Layout Conversion in a Large Cosmological Simulation Code",
( poster at COSAS'18, "International Symposium on Computational Science at Scale", Erlangen, Germany, 2018.09.07 )
"Interfacing Epetra to the RSB sparse matrix format for shared-memory performance",
( talk at the 5th "European Trilinos User Group Meeting", Garching, 2016.04.19 )
"Auto-tuning shared memory parallel Sparse BLAS operations in librsb-1.2",
( poster at the workshop EXASCALE15, "Sparse Solvers for Exascale: From Building Blocks to Applications", Greifswald, Germany, 2015.03.23-25 )
- Reports from my past work in the EFDA HLST ("European Fusion Development Agreement", High Level Support Team) at the Max Planck Institute for Plasma Physics (IPP) in Garching, Germany
"Efficient Multithreaded Untransposed, Transposed or Symmetric Sparse Matrix-Vector Multiplication with the Recursive Sparse Blocks Format",
(also on PuRe)
( journal article appeared in Parallel Computing 40(7): 251-270 (2014) )
- Other past conference proceedings are listed in the Computer Science Bibliography (DBLP)
- The librsb library for fast shared memory sparse matrix computations in C/C++/Fortran
- The SparseRSB package for interactive sparse matrix computations under GNU/Octave
- The PyRSB package to call librsb via the Python scripting language
- The FIM (Fbi IMproved) ASCII / X / SDL / Framebuffer image viewer for Linux
michele dot martone at lrz dot de for work stuff,
michelemartone at users dot sourceforge dot net for free software stuff.
Please consider using email encryption (PGP/GPG) when sending email to me.
There are many good reasons for using PGP encryption in email, and good tutorials are available online.
The box below shows an example of how easy it is to use PGP via GPG in the BASH shell:
# All the following can be pasted in a terminal running the BASH shell.
# Lines beginning with # (like this) are comments.
# The others are commands.
#
# import the recipient public key:
gpg --search 0xe0e669c8ef1258b8
# Output may vary; the following key is the one you want to import when prompted by gpg:
# ...
# 1024 bit DSA key 1DBB555AEA359B8AAF0C6B88E0E669C8EF1258B8, created: 2005-06-26
# Keys 1-1 of 1 for "0xe0e669c8ef1258b8".  Enter number(s), N)ext, or Q)uit > 1
# gpg: requesting key EF1258B8 from hkp server keys.gnupg.net
# create a sample file:
date > file.txt
# encipher it for the recipient of the public key:
gpg -r 0xe0e669c8ef1258b8 --encrypt file.txt
# the ciphered file named file.txt.gpg is ready to be sent via email, diskette, mule or pigeon.
# after having encrypted a file for me you may want to sign it
# (signing proves that you produced that file):
gpg -sbav file.txt.gpg
# now you may want to send the file.txt.gpg.asc signature file as well.
# verify file.txt.gpg and signature file.txt.gpg.asc you received from me:
gpg --verify file.txt.gpg.asc
# These keys are also to be obtained online:
# http://keys.gnupg.net/pks/lookup?op=vindex&fingerprint=on&search=0xE0E669C8EF1258B8
Q: Why is this homepage so ugly?
A: It was never meant to be beautiful; it was meant to be informative. If that is not the case, feel free to send me your suggestions for improvement. If you have viable suggestions for making it more beautiful, please send those, too.
Q: Why do you write free software at work?
A: Lately I have been working in the service of science, with public (taxpayer) money. You will agree that getting public code out of public money is a good return, in terms of transparency at the very least. Please consult the Free Software Foundation web pages for many more well-written motivations.
Q: Why do you write free software in your free time?
A: Sharing is caring, and there's no fun playing alone.
No, seriously, check out the Free Software Foundation web page for that, because it's a combination of reasons.
Q: What do you mean by Sparse Matrices?
A: By Sparse Matrices I mean matrices (that is, the mathematical concept of tables of numbers, usually found in numerical linear algebra) containing many more zero values than non-zero values. In computers, it is often better to represent such matrices in a sparse form (that is, a list form, e.g. [[a11,a22], [1,2], [1,2]] for a 2 by 2 matrix having (a11,a22) on the diagonal and zero values elsewhere). A dense (or full) representation (tabular, with all the zero values explicitly represented, e.g. [a11,0; 0,a22]) might not be good for such matrices: it might be impossible (the matrix may not fit in a computer's memory), or it can slow down the computation excessively.
Q: What are High Performance Computing, OpenMP and MPI?
A: You can think of High Performance Computing (HPC) as the art and technology around the computers that are the fastest at a given time (usually so-called supercomputers). OpenMP (Open Multi-Processing) and MPI (Message Passing Interface) are programming standards commonly used in HPC.
In memoriam Silvio Gori:
Solei ka chofe _ _ / __ __ / \ / / \ \ o< __ \ | / __ \ 0___ / _ \\w// \ ^ \ / / \w/^ \ / / OV`0 \ / V V | V@ |\ V o/~ | \ W _U__U_ B | \ __W_____T_____L o |--o\ -- / \ \----------/\ ~~~~~~~~~~~~~~^~~~^~~~~~~^^^^^~~^~~~
________________________________________
impressum disclaimer
________________________________________
$LastChangedDate: 2020-05-25 14:14:35 +0200 (Mon, 25 May 2020) $