Physics and mathematical algorithms:
Which diagonalization algorithms are available?
For the time being, only two kinds of exact diagonalization algorithms are available: the Householder algorithm, for full diagonalization of relatively small Hamiltonians, and the Lánczos algorithm. DiagHam has also been designed to support the Density Matrix Renormalization Group (DMRG) algorithm in the near future.
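To illustrate the idea behind the Lánczos method (this is a generic sketch, not DiagHam code; the names `lanczosLowest` and `countBelow` are illustrative), the Hamiltonian is only accessed through matrix-vector products, which build a small tridiagonal matrix whose extremal eigenvalues converge quickly to those of the full operator:

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Number of eigenvalues of the symmetric tridiagonal matrix (diagonal a,
// off-diagonal b) lying strictly below x, via the Sturm-sequence count.
static int countBelow(const std::vector<double>& a,
                      const std::vector<double>& b, double x) {
    int count = 0;
    double d = 1.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        d = a[i] - x - ((i > 0) ? (b[i - 1] * b[i - 1] / d) : 0.0);
        if (d == 0.0) d = 1e-300;   // guard against division by zero
        if (d < 0.0) ++count;
    }
    return count;
}

// k-step Lanczos estimate of the lowest eigenvalue of an n x n symmetric
// operator given only through its matrix-vector product `matvec`.
double lanczosLowest(
    const std::function<std::vector<double>(const std::vector<double>&)>& matvec,
    int n, int k) {
    std::vector<double> a, b;                  // tridiagonal Ritz matrix
    std::vector<double> v(n, 0.0), vPrev(n, 0.0);
    v[0] = 1.0;                                // fixed start vector for the demo
    double beta = 0.0;
    for (int j = 0; j < k; ++j) {
        std::vector<double> w = matvec(v);
        double alpha = 0.0;
        for (int i = 0; i < n; ++i) alpha += v[i] * w[i];
        for (int i = 0; i < n; ++i) w[i] -= alpha * v[i] + beta * vPrev[i];
        a.push_back(alpha);
        beta = 0.0;
        for (int i = 0; i < n; ++i) beta += w[i] * w[i];
        beta = std::sqrt(beta);
        if (beta < 1e-14 || j + 1 == k) break; // invariant subspace or done
        b.push_back(beta);
        vPrev = v;
        for (int i = 0; i < n; ++i) v[i] = w[i] / beta;
    }
    // Bisection on the Sturm count isolates the smallest Ritz value.
    double lo = -100.0, hi = 100.0;            // crude bracket, fine for a demo
    while (hi - lo > 1e-10) {
        double mid = 0.5 * (lo + hi);
        if (countBelow(a, b, mid) >= 1) hi = mid; else lo = mid;
    }
    return 0.5 * (lo + hi);
}
```

For example, applying this to the hopping Hamiltonian of an 8-site open tight-binding chain (off-diagonal elements -1) with k = n = 8 reproduces the exact ground-state energy -2 cos(π/9). A production Lanczos (as in DiagHam) additionally needs reorthogonalization and convergence monitoring, which this sketch omits.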
What is the largest Hilbert space dimension DiagHam can handle?
It depends essentially on the Hamiltonian you have to diagonalize and on your computer resources. For example, we have been able to handle Hilbert spaces of dimension on the order of 10^8 for the fractional quantum Hall effect.
Can I use results obtained with DiagHam in scientific publications?
Short answer: yes, without any restriction: the GPL imposes no usage restriction, so you can include results obtained with DiagHam in a scientific publication without including DiagHam developers in the author list or citing DiagHam. If you have made modifications, you do not have to distribute them, as long as you respect the GPL terms. Nevertheless, we strongly encourage you to tell us about your publication so that we can add it to the publication list
. Of course, we would be pleased to add your modifications/contributions to the next DiagHam version.
DiagHam developers are physicists, not software engineers. So if one of them develops support for a specific physical system for you, this should be seen as a scientific collaboration.
Can you develop code for my model?
You can always try asking one of the DiagHam developers. If they have time and the subject is one of their research interests, perhaps a scientific collaboration can be started.
Does DiagHam work on my (super)computer?
See the list of all architecture/OS/compiler combinations with which DiagHam has been tested
Does DiagHam support parallelization on shared memory architecture (i.e. multiprocessor computer)?
Yes. Your OS has to support POSIX threads. If you use one of the programs bundled with DiagHam, most of them have a -S option to turn on parallelization on SMP computers, and you can tune the number of processors used with the --processors option.
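As a rough illustration of what shared-memory parallelization over POSIX threads means (this is generic code, not DiagHam internals; `SumTask` and `parallelSum` are illustrative names), here is a reduction split across threads that all read the same memory:

```cpp
#include <pthread.h>
#include <algorithm>
#include <vector>

// Work assigned to one thread: sum data[begin, end).
struct SumTask {
    const double* data;
    long begin;
    long end;
    double partial;
};

// Thread entry point: each thread reduces its own slice.
static void* sumWorker(void* arg) {
    SumTask* task = static_cast<SumTask*>(arg);
    double s = 0.0;
    for (long i = task->begin; i < task->end; ++i) s += task->data[i];
    task->partial = s;
    return 0;
}

// Sum n doubles using nbrThreads POSIX threads sharing the same memory,
// much like tuning DiagHam's processor count for an SMP run.
double parallelSum(const double* data, long n, int nbrThreads) {
    std::vector<pthread_t> threads(nbrThreads);
    std::vector<SumTask> tasks(nbrThreads);
    long chunk = (n + nbrThreads - 1) / nbrThreads;
    for (int t = 0; t < nbrThreads; ++t) {
        tasks[t].data = data;
        tasks[t].begin = std::min<long>(n, static_cast<long>(t) * chunk);
        tasks[t].end = std::min<long>(n, static_cast<long>(t + 1) * chunk);
        pthread_create(&threads[t], 0, sumWorker, &tasks[t]);
    }
    double total = 0.0;
    for (int t = 0; t < nbrThreads; ++t) {
        pthread_join(threads[t], 0);
        total += tasks[t].partial;  // safe to read after join
    }
    return total;
}
```

In a matrix-vector product, the same pattern applies: each thread handles a slice of the rows, and no data needs to be copied between processors because the memory is shared.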
Does DiagHam support parallelization on distributed memory architecture (i.e. cluster)?
DiagHam has partial support for MPI. This support is used to distribute the Hamiltonian over multiple nodes (but not the eigenvectors).
How can you be sure of the accuracy of DiagHam?
As with most software, you can only be sure that the programs work for the same configuration as the test bed (same problem with same inputs, same architecture, same OS, ...). That is why the documentation of each built-in DiagHam program indicates whether it reproduces results that can be found in the scientific literature or evaluated with other programs.
Thanks to the modular architecture of DiagHam, we have been able to test individual components where possible. Some components are also shared by different programs whose results have been checked. In the near future, we will supply a database containing information about the accuracy and known bugs of each component.
Does DiagHam use Lapack?
DiagHam now has the ability to rely on the Lapack libraries for some matrix operations (in addition to DiagHam's internal libraries). Future versions will surely offer improved usage of Lapack.
Bug reports and contributions:
How can I contribute to DiagHam?
You don't have to know C++ to contribute to DiagHam. You can use its built-in programs, report bugs or problems, tell us if you have published with it...
I've found a bug/problem in DiagHam, how can I report it?
You can report your bug to one of the DiagHam authors; the contact list is here
. Please include as much information as possible (version number, architecture, OS, ...). If you have a patch that fixes the problem, don't hesitate to send it.
I've done a class/program that can fit into DiagHam, are you interested in it?
Of course. You can contact one of the DiagHam developers. Nevertheless, you need to provide enough documentation (using the conventions we already use), even if writing documentation is painful ;).