Concepts
I only need binding, or the number of cores, why should I use hwloc?
The main benefit of hwloc is its portable API, which works on a variety of operating systems. It supports binding of threads, processes and memory buffers (see CPU binding and Memory binding). Even if some features are not supported on some systems, using hwloc is much easier than reimplementing your own portability layer.
Moreover, hwloc provides knowledge of cores and hardware threads. It offers easy ways to bind tasks to individual hardware threads, or to entire multithreaded cores, etc. See How may I ignore symmetric multithreading, hyper-threading, etc. in hwloc?. Most alternative binding software does not even know whether each core is single-threaded, multithreaded or hyper-threaded; it would bind to individual threads with no way to tell whether multiple tasks end up on the same physical core.
However, using hwloc comes with an overhead since a topology must be loaded before gathering information and binding tasks or memory. Fortunately, this overhead may be significantly reduced by filtering uninteresting information out of the topology; see What may I disable to make hwloc faster? below.
What may I disable to make hwloc faster?
Building a hwloc topology on a large machine may be slow because the discovery of hundreds of hardware cores or threads takes time (especially when reading thousands of sysfs files on Linux). Ignoring some objects (for instance caches) that are not useful to the current application may reduce this overhead. One should also consider using XML (see I do not want hwloc to rediscover my enormous machine topology every time I rerun a process) to work around such issues.
Contrary to lstopo which enables most features (see Why is lstopo slow?), the default hwloc configuration is to keep all objects enabled except I/Os and instruction caches. This usually builds a very precise view of the CPU and memory subsystems, which may be reduced if some information is unneeded.
The following code tells hwloc to build a much smaller topology that only contains Cores (explicitly filtered in below), hardware threads (PUs, which cannot be filtered out), NUMA nodes (which cannot be filtered out), and the root object (usually a Machine; the root cannot be removed without breaking the tree):
hwloc_topology_t topology;
hwloc_topology_init(&topology);
/* filter everything out */
hwloc_topology_set_all_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_NONE);
/* filter Cores back in */
hwloc_topology_set_type_filter(topology, HWLOC_OBJ_CORE, HWLOC_TYPE_FILTER_KEEP_ALL);
hwloc_topology_load(topology);
However, one should remember that filtering such objects out removes locality information from the hwloc tree. For instance, we may no longer know which PU is close to which NUMA node, which is useful to applications that explicitly want to place specific memory buffers close to specific tasks. To ignore useless objects but keep those that bring locality/hierarchy information, applications may replace HWLOC_TYPE_FILTER_KEEP_NONE with HWLOC_TYPE_FILTER_KEEP_STRUCTURE above.
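For instance, a variant of the code above (a minimal sketch) keeps structure-relevant objects instead of removing everything:
hwloc_topology_t topology;
hwloc_topology_init(&topology);
/* keep objects that bring hierarchy/locality information, drop the others */
hwloc_topology_set_all_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_STRUCTURE);
hwloc_topology_load(topology);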
Starting with hwloc 2.8, it is also possible to ignore distances between objects, memory performance attributes, and kinds of CPU cores, by setting topology flags before load:
[...]
/* disable distances, memory attributes and CPU kinds */
hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_NO_DISTANCES
|HWLOC_TOPOLOGY_FLAG_NO_MEMATTRS
|HWLOC_TOPOLOGY_FLAG_NO_CPUKINDS);
[...]
hwloc_topology_load(topology);
Finally, it is possible to prevent some hwloc components from being loaded and queried. If you are sure that the Linux (or x86) component is enough to discover everything you need, you may ask hwloc to disable all other components by setting something like HWLOC_COMPONENTS=linux,stop in the environment. See Components and plugins for details.
Should I use logical or physical/OS indexes? and how?
One of the original reasons why hwloc was created is that physical/OS indexes (obj->os_index) are often crazy and unpredictable: processor numbers are usually non-contiguous (processors 0 and 1 are not physically close), they vary from one machine to another, and may even change after a BIOS or system update. These numbers make task placement hardly portable. Moreover, some objects have no physical/OS numbers at all (caches), and some have non-unique numbers (core numbers are only unique within a socket). Physical/OS indexes are only guaranteed to exist and be unique for PUs and NUMA nodes.
hwloc therefore introduces logical indexes (obj->logical_index), which are portable, contiguous and logically ordered (based on the resource organization in the locality tree). In general, one should only use logical indexes and just let hwloc do the internal conversion when really needed (when talking to the OS and hardware).
The hwloc developers recommend that users do not use physical/OS indexes unless they really know what they are doing. The main reason for still using physical/OS indexes is when interacting with non-hwloc tools such as numactl or taskset, or when reading hardware information from raw sources such as /proc/cpuinfo.
The lstopo options -l and -p may be used to switch between logical indexes (prefixed with L#) and physical/OS indexes (P#). Converting one into the other may also be achieved with hwloc-calc, which may manipulate either logical or physical indexes as input or output. See also hwloc-calc.
# Convert PU with physical number 3 into logical number
$ hwloc-calc -I pu --physical-input --logical-output pu:3
5
# Convert a set of NUMA nodes from logical to physical
# (beware that the output order may not match the input order)
$ hwloc-calc -I numa --logical-input --physical-output numa:2-3 numa:7
0,2,5
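The same kind of conversion may be performed in C with helpers such as hwloc_get_pu_obj_by_os_index() (a minimal sketch, assuming a topology was already loaded):
/* convert the PU with physical (OS) index 3 into its logical index */
hwloc_obj_t pu = hwloc_get_pu_obj_by_os_index(topology, 3);
if (pu)
  printf("PU P#3 is PU L#%u\n", pu->logical_index);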
hwloc is only a structural model, it ignores performance models, memory bandwidth, etc.?
hwloc is indeed designed to provide applications with a structural model of the platform. This is an orthogonal approach to describing the machine with performance models, for instance using memory bandwidth or latencies measured by benchmarks. We believe that both approaches are important for helping applications make the most of the hardware.
For instance, on a dual-processor host with four cores each, hwloc clearly shows which four cores are together. Latencies between all pairs of cores of the same processor are likely identical, and also likely lower than the latency between cores of different processors. However, the structural model cannot guarantee such implementation details. On the other hand, performance models would reveal such details without always clearly identifying which cores are in the same processor.
The focus of hwloc is mainly on the structural modeling side. However, hwloc lets users add performance information to the topology through distances (see Distances), memory attributes (see Memory Attributes) or even custom annotations (see How do I annotate the topology with private notes?). hwloc may also use such distance information for grouping objects together (see hwloc only has a one-dimensional view of the architecture, it ignores distances? and What are these Group objects in my topology?).
hwloc only has a one-dimensional view of the architecture, it ignores distances?
hwloc places all objects in a tree. Each level is a one-dimensional view of a set of similar objects. All children of the same object (siblings) are assumed to be equally interconnected (same distance between any of them), while the distance between children of different objects (cousins) is supposed to be larger.
Modern machines exhibit complex hardware interconnects, so this tree may miss some information about the actual physical distances between objects. The hwloc topology may therefore be annotated with distance information that may be used to build a more realistic (multi-dimensional) representation of each level. For instance, there can be a distance matrix representing the latencies between any pair of NUMA nodes if the BIOS and/or operating system reports them.
For more information about the hwloc distances, see Distances.
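As a minimal sketch (assuming the topology was already loaded and the platform reported such a matrix), a NUMA distance matrix may be retrieved with the hwloc/distances.h API:
#include <hwloc/distances.h>

/* fetch the first distance matrix between NUMA nodes, if any */
struct hwloc_distances_s *dist;
unsigned nr = 1;
if (!hwloc_distances_get_by_type(topology, HWLOC_OBJ_NUMANODE, &nr, &dist, 0, 0)
    && nr > 0) {
  /* dist->values[i*dist->nbobjs+j] is the distance from dist->objs[i] to dist->objs[j] */
  unsigned i, j;
  for (i = 0; i < dist->nbobjs; i++)
    for (j = 0; j < dist->nbobjs; j++)
      printf("NUMA L#%u -> NUMA L#%u: %llu\n",
             dist->objs[i]->logical_index, dist->objs[j]->logical_index,
             (unsigned long long) dist->values[i*dist->nbobjs + j]);
  hwloc_distances_release(topology, dist);
}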
What are these Group objects in my topology?
hwloc comes with a set of predefined object types (Core, Package, NUMA node, Caches) that match the vast majority of hardware platforms. The HWLOC_OBJ_GROUP type was designed for cases where this set is not sufficient. Groups may be used anywhere to add more structural information to the topology, for instance to show that 2 out of 4 NUMA nodes are actually closer to each other than to the others. When applicable, the subtype field describes why a Group was actually added (see also Normal attributes).
hwloc currently uses Groups for the following reasons:
- NUMA parents when memory locality does not match any existing object.
- I/O parents when I/O locality does not match any existing object.
- Distance-based groups made of close objects.
- AMD Core Complex (CCX) (subtype is Complex, in the x86 backend), but these objects are usually merged with the L3 caches or Dies.
- AMD Bulldozer dual-core compute units (subtype is ComputeUnit, in the x86 backend), but these objects are usually merged with the L2 caches.
- Intel Extended Topology Enumeration levels (in the x86 backend).
- Windows processor groups, when HWLOC_WINDOWS_PROCESSOR_GROUP_OBJS=1 is set in the environment (except if they contain exactly a single NUMA node, a single Package, etc.).
- IBM S/390 "Books" on Linux (subtype is Book).
- Linux Clusters of CPUs (subtype is Cluster), for instance for ARM cores sharing some internal cache or bus, or x86 cores sharing an L2 cache (since Linux kernel 5.16). HWLOC_DONT_MERGE_CLUSTER_GROUPS=1 may be set in the environment to disable the automerging of these groups with identical caches, etc.
- AIX unknown hierarchy levels.
hwloc Groups are only kept if no other object has the same locality information. This means that a Group containing a single child is merged into that child, and a Group is merged into its parent if it is its only child. For instance, a Windows processor group containing a single NUMA node would be merged with that NUMA node since the latter already contains the relevant hierarchy information.
When inserting a custom Group with hwloc_topology_insert_group_object(), this merging may be disabled by setting its dont_merge attribute.
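A minimal sketch of such a custom Group insertion (assuming a loaded topology containing at least two NUMA nodes):
/* group the first two NUMA nodes under a custom Group object */
hwloc_obj_t group = hwloc_topology_alloc_group_object(topology);
hwloc_obj_add_other_obj_sets(group, hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, 0));
hwloc_obj_add_other_obj_sets(group, hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, 1));
group->attr->group.dont_merge = 1; /* keep the Group even if it duplicates existing locality */
hwloc_topology_insert_group_object(topology, group);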
What happens if my topology is asymmetric?
hwloc supports asymmetric topologies even if most platforms are usually symmetric. For example, there could be different types of processors in a single machine, each with different numbers of cores, symmetric multithreading, or levels of caches.
In practice, asymmetric topologies are rare but occur for at least two reasons:
- Intermediate Group objects may be added for I/O affinity: on a 4-package machine, an I/O bus may be connected to 2 packages. These packages are below an additional Group object, while the other packages are not (see also What are these Group objects in my topology?).
- If only part of a node is available to the current process, for instance because the resource manager uses Linux Cgroups to restrict process resources, some cores (or NUMA nodes) will disappear from the topology (unless the flag HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED was passed). On a 32-core machine where 12 cores were allocated to the process, this may lead to one CPU package with 8 cores, another one with only 4 cores, and two missing packages.
To understand how hwloc manages such cases, one should first remember the meaning of levels and cousin objects. All objects of the same type are gathered into horizontal levels of a given depth. They are also connected through the cousin pointers of the hwloc_obj structure. Object attributes (cache depth and type, group depth) are also taken into account when gathering objects into horizontal levels. To be clear: there will be one level for L1i caches, another level for L1d caches, another one for L2, etc.
If the topology is asymmetric (e.g., if a group is missing above some processors), a given horizontal level will still exist if there exist any objects of that type. However, some branches of the overall tree may not have an object located in that horizontal level. Note that this specific hole within one horizontal level does not imply anything for other levels. All objects of the same type are gathered in horizontal levels even if their parents or children have different depths and types.
See the diagram in Terms and Definitions for a graphical representation of such topologies.
Moreover, it is important to understand that the same parent object may have children of different types (and therefore different depths). These children are siblings (because they have the same parent), but they are not cousins (because they do not belong to the same horizontal level).
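As a minimal sketch (assuming a loaded topology), walking a whole level through cousin pointers works regardless of any asymmetry above that level:
/* walk all Core objects of the Core level through cousin pointers */
hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
while (core) {
  char set[128];
  hwloc_bitmap_snprintf(set, sizeof(set), core->cpuset);
  printf("Core L#%u cpuset %s (parent type %s)\n",
         core->logical_index, set, hwloc_obj_type_string(core->parent->type));
  core = core->next_cousin;
}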
What happens to my topology if I disable symmetric multithreading, hyper-threading, etc. in the system?
hwloc creates one PU (processing unit) object per hardware thread. If your machine supports symmetric multithreading, for instance Hyper-Threading, each Core object may contain multiple PU objects:
$ lstopo -
...
Core L#0
PU L#0 (P#0)
PU L#1 (P#2)
Core L#1
PU L#2 (P#1)
PU L#3 (P#3)
x86 machines usually offer the ability to disable hyper-threading in the BIOS. It can also be disabled on the Linux kernel command-line at boot time, or later by writing to sysfs virtual files.
If you do so, the hwloc topology structure does not significantly change, but some PU objects will not appear anymore. No level will disappear: you will see the same number of Core objects, but each of them will now contain a single PU. The PU level does not disappear either (remember that hwloc topologies always contain a PU level at the bottom), even if there is a single PU object per Core parent.
$ lstopo -
...
Core L#0
PU L#0 (P#0)
Core L#1
PU L#1 (P#1)
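In code, a minimal sketch for checking whether SMT is currently enabled is to compare the number of PUs with the number of Cores (assuming a loaded topology):
int nbpus = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
int nbcores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
if (nbcores > 0 && nbpus > nbcores)
  printf("SMT enabled: %d hardware threads across %d cores\n", nbpus, nbcores);
else
  printf("one hardware thread per core\n");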
How may I ignore symmetric multithreading, hyper-threading, etc. in hwloc?
First, see What happens to my topology if I disable symmetric multithreading, hyper-threading, etc. in the system? for more information about multithreading.
If you need to ignore symmetric multithreading in software, you should likely manipulate hwloc Core objects directly:
/* get the number of cores */
unsigned nbcores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
...
/* get the third core below the first package */
hwloc_obj_t package, core;
package = hwloc_get_obj_by_type(topology, HWLOC_OBJ_PACKAGE, 0);
core = hwloc_get_obj_inside_cpuset_by_type(topology, package->cpuset,
HWLOC_OBJ_CORE, 2);
Whenever you want to bind a process or thread to a core, make sure you singlify its cpuset first, so that the task is actually bound to a single thread within this core (to avoid useless migrations).
/* bind on the second core */
hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 1);
hwloc_cpuset_t set = hwloc_bitmap_dup(core->cpuset);
hwloc_bitmap_singlify(set);
hwloc_set_cpubind(topology, set, 0);
hwloc_bitmap_free(set);
With the hwloc-calc or hwloc-bind command-line tools, you may specify that you only want a single thread within each core by asking for their first PU object:
$ hwloc-calc core:4-7
0x0000ff00
$ hwloc-calc core:4-7.pu:0
0x00005500
When binding a process on the command-line, you may either specify the exact thread that you want to use, or ask hwloc-bind to singlify the cpuset before binding:
$ hwloc-bind core:3.pu:0 -- echo "hello from first thread on core #3"
hello from first thread on core #3
...
$ hwloc-bind core:3 --single -- echo "hello from a single thread on core #3"
hello from a single thread on core #3
Advanced
I do not want hwloc to rediscover my enormous machine topology every time I rerun a process
Although the topology discovery is not expensive on common machines, its overhead may become significant when multiple processes repeat the discovery on large machines (for instance when starting one process per core in a parallel application). The machine topology usually does not vary much, except if some cores are stopped/restarted or if the administrator restrictions are modified. Thus rediscovering the whole topology again and again may look useless.
For this purpose, hwloc offers XML import/export and shared memory features.
XML lets you save the discovered topology to a file (for instance with the lstopo program) and reload it later by setting the HWLOC_XMLFILE environment variable. The HWLOC_THISSYSTEM environment variable should also be set to 1 to assert that the loaded file really describes the underlying system.
Loading a XML topology is usually much faster than querying multiple files or calling multiple functions of the operating system. It is also possible to manipulate such XML files with the C programming interface, and the import/export may also be directed to a memory buffer (that may, for instance, be transmitted between applications through a package). See also Importing and exporting topologies from/to XML files.
Note: The environment variable HWLOC_THISSYSTEM_ALLOWED_RESOURCES may be used to load a XML topology that contains the entire machine and restrict it to the part that is actually available to the current process (e.g. when Linux Cgroup/Cpuset are used to restrict the set of resources). See Environment Variables.
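As a minimal sketch (the file path is only illustrative), exporting and reloading a topology through XML in C may look like this:
/* export an already-loaded topology to XML */
hwloc_topology_export_xml(topology, "/tmp/topo.xml", 0);

/* reload it later, possibly in another process on the same machine */
hwloc_topology_t t2;
hwloc_topology_init(&t2);
hwloc_topology_set_xml(t2, "/tmp/topo.xml");
/* tell hwloc that this XML really describes the current machine, so binding stays enabled */
hwloc_topology_set_flags(t2, HWLOC_TOPOLOGY_FLAG_IS_THISSYSTEM);
hwloc_topology_load(t2);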
Shared-memory topologies consist in one process exposing its topology in a shared-memory buffer so that other processes (running on the same machine) may use it directly. This has the advantage of reducing the memory footprint since a single topology is stored in physical memory for multiple processes. However, it requires all processes to map this shared-memory buffer at the same virtual address, which may be difficult in some cases. This API is described in Sharing topologies between processes.
How many topologies may I use in my program?
hwloc lets you manipulate multiple topologies at the same time. However, these topologies consume memory and system resources (for instance file descriptors) until they are destroyed. It is therefore discouraged to open the same topology multiple times.
Sharing a single topology between threads is easy (see Thread Safety) since the vast majority of accesses are read-only.
If multiple topologies of different (but similar) nodes are needed in your program, have a look at How to avoid memory waste when manipulating multiple similar topologies?.
How to avoid memory waste when manipulating multiple similar topologies?
hwloc does not share information between topologies. If multiple similar topologies are loaded in memory, for instance the topologies of different identical nodes of a cluster, lots of information will be duplicated.
hwloc/diff.h (see also Topology differences) offers the ability to compute topology differences, apply or unapply them, or export/import to/from XML. However, this feature is limited to basic differences such as attribute changes. It does not support complex modifications such as adding or removing some objects.
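A minimal sketch of this API (topo1 and topo2 are hypothetical, already-loaded topologies of two similar nodes):
#include <hwloc/diff.h>

/* compute the difference between the two topologies and apply it to a copy */
hwloc_topology_diff_t diff;
int err = hwloc_topology_diff_build(topo1, topo2, 0, &diff);
if (!err && diff) {
  hwloc_topology_t copy;
  hwloc_topology_dup(&copy, topo1);          /* duplicate topo1 ... */
  hwloc_topology_diff_apply(copy, diff, 0);  /* ... and turn the copy into topo2 */
  hwloc_topology_diff_destroy(diff);
}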
How do I annotate the topology with private notes?
Each hwloc object contains a userdata field that may be used by applications to store private pointers. This field is only valid during the lifetime of the containing object and topology. It becomes invalid as soon as the topology is destroyed, or as soon as the object disappears, for instance when the topology is restricted. The userdata field is not exported/imported to/from XML by default since hwloc does not know what it contains. This behavior may be changed by specifying application-specific callbacks with hwloc_topology_set_userdata_export_callback() and hwloc_topology_set_userdata_import_callback().
Each object may also contain some info attributes (name and value strings) that are set up by hwloc during discovery and that may be extended by the user with hwloc_obj_add_info() (see also Object attributes). Contrary to the userdata field, which is unique, multiple info attributes may exist for each object, even with the same name. These attributes are always exported to XML. However, only character strings may be used as names and values.
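A minimal sketch combining both mechanisms (my_private_data and the MyAppLabel attribute name are hypothetical application-side names):
/* attach a private pointer (not exported to XML by default)
 * and a string info attribute (always exported) to the first Core */
hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
core->userdata = my_private_data;
hwloc_obj_add_info(core, "MyAppLabel", "worker0");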
It is also possible to insert Misc objects with a custom name anywhere as a leaf of the topology (see Miscellaneous objects). And Misc objects may have their own userdata and info attributes just like any other object.
The hwloc-annotate command-line tool may be used for adding Misc objects and info attributes.
There is also a topology-specific userdata pointer that can be used to recognize different topologies by storing a custom pointer. It may be manipulated with hwloc_topology_set_userdata() and hwloc_topology_get_userdata().
How do I create a custom heterogeneous and asymmetric topology?
Synthetic topologies (see Synthetic topologies) allow creating custom topologies, but they are always symmetric: same number of cores in each package, same local NUMA nodes, same shared caches, etc. To create an asymmetric topology, for instance to simulate hybrid CPUs, one may want to start from a larger symmetric topology and restrict it.
Assuming we want two packages, one with 4 dual-threaded cores, and one with 8 single-threaded cores, first we create a topology with two identical packages, each with 8 dual-threaded cores:
$ lstopo -i "pack:2 core:8 pu:2" topo.xml
Then create the bitmask representing the PUs that we wish to keep and pass it to lstopo's restrict option:
$ hwloc-calc -i topo.xml pack:0.core:0-3.pu:0-1 pack:1.core:0-7.pu:0
0x555500ff
$ lstopo -i topo.xml --restrict 0x555500ff topo2.xml
$ mv -f topo2.xml topo.xml
To mark the cores of the first package as Big (power hungry) and those of the second package as Little (energy efficient), define CPU kinds:
$ hwloc-annotate topo.xml topo.xml -- none -- cpukind $(hwloc-calc -i topo.xml pack:0) 1 0 CoreType Big
$ hwloc-annotate topo.xml topo.xml -- none -- cpukind $(hwloc-calc -i topo.xml pack:1) 0 0 CoreType Little
A similar method may be used for heterogeneous memory. First we specify 2 NUMA nodes per package in our synthetic description:
$ lstopo -i "pack:2 [numa(memory=100GB)] [numa(memory=10GB)] core:8 pu:2" topo.xml
Then remove the second node of the first package:
$ hwloc-calc -i topo.xml --nodeset node:all ~pack:0.node:1
0x0000000e
$ lstopo -i topo.xml --restrict nodeset=0xe topo2.xml
$ mv -f topo2.xml topo.xml
Then make one large node even bigger:
$ hwloc-annotate topo.xml topo.xml -- pack:0.numa:0 -- size 200GB
Now we have 200GB in the first package, and 100GB+10GB in the second package.
Next we may specify that the small NUMA node (the second of the second package) is HBM while the large ones are DRAM:
$ hwloc-annotate topo.xml topo.xml -- pack:0.numa:0 pack:1.numa:0 -- subtype DRAM
$ hwloc-annotate topo.xml topo.xml -- pack:1.numa:1 -- subtype HBM
Finally we may define memory performance attributes to specify that the HBM bandwidth (200GB/s) from local cores is higher than the DRAM bandwidth (50GB/s):
$ hwloc-annotate topo.xml topo.xml -- pack:0.numa:0 -- memattr Bandwidth pack:0 50000
$ hwloc-annotate topo.xml topo.xml -- pack:1.numa:0 -- memattr Bandwidth pack:1 50000
$ hwloc-annotate topo.xml topo.xml -- pack:1.numa:1 -- memattr Bandwidth pack:1 200000
There is currently no way to create or modify I/O devices attached to such fake topologies. There is also no way to have partial levels, e.g. an L3 cache in one package but not in the other.
More changes may obviously be performed by manually modifying the XML export file. Simple operations such as modifying object attributes (cache size, memory size, name-value info attributes, etc.), moving I/O subtrees, moving Misc objects, or removing objects are easy to perform.
However, modifying CPU and Memory objects requires care since cpusets and nodesets are supposed to remain consistent between parents and children. Similarly, PCI bus IDs should remain consistent between bridges and children within an I/O subtree.
Caveats
Why is lstopo slow?
lstopo enables most hwloc objects and discovery flags by default so that the output topology is as precise as possible (while hwloc disables many of them by default). This includes I/O device discovery through PCI libraries as well as external libraries such as NVML. To speed up lstopo, you may disable such features with command-line options such as --no-io.
When NVIDIA GPU probing is enabled (e.g. with CUDA or NVML), one may enable the Persistent mode (with nvidia-smi -pm 1) to avoid significant GPU wakeup and initialization overhead.
When AMD GPU discovery is enabled with OpenCL and hwloc is used remotely over ssh, some spurious round-trips on the network may significantly increase the discovery time. Forcing the DISPLAY environment variable to the remote X server display (usually :0) instead of only setting the COMPUTE variable may avoid this.
Also remember that these hwloc components may be disabled. At build time, one may pass configure flags such as --disable-opencl, --disable-cuda, --disable-nvml, --disable-rsmi, and --disable-levelzero. At runtime, one may set the environment variable HWLOC_COMPONENTS=-opencl,-cuda,-nvml,-rsmi,-levelzero or call hwloc_topology_set_components().
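A minimal sketch of the programmatic variant (assuming hwloc >= 2.1, where hwloc_topology_set_components() supports blacklisting components by name):
/* blacklist GPU-related components at runtime, between init and load */
hwloc_topology_t topology;
hwloc_topology_init(&topology);
hwloc_topology_set_components(topology, HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST, "cuda");
hwloc_topology_set_components(topology, HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST, "nvml");
hwloc_topology_set_components(topology, HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST, "opencl");
hwloc_topology_load(topology);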
Remember that these backends are disabled by default, except in lstopo. If hwloc itself is still too slow even after disabling all the I/O devices as explained above, see also What may I disable to make hwloc faster? for disabling even more features.
Does hwloc require privileged access?
hwloc discovers the topology by querying the operating system. Some minor features may require privileged access to the operating system. For instance, memory module discovery on Linux is reserved to root, and the entire PCI discovery on Solaris and the BSDs requires access to some special files that are usually restricted to root (/dev/pci* or /devices/pci*).
To work around this limitation, it is recommended to export the topology as a XML file generated by the administrator (with the lstopo program) and make it available to all users (see Importing and exporting topologies from/to XML files). It will offer all discovery information to any application without requiring privileged access anymore. Only the necessary hardware characteristics will be exported; no sensitive information will be disclosed through this XML export.
This XML-based model also has the advantage of speeding up the discovery because reading a XML topology is usually much faster than querying the operating system again.
The hwloc-dump-hwdata utility is also involved in gathering privileged information at boot time and making it available to non-privileged users (note that this may require a specific SELinux MLS policy module). However, it only applies to Intel Xeon Phi processors for now (see Why do I need hwloc-dump-hwdata for memory on Intel Xeon Phi processor?). See also HWLOC_DUMPED_HWDATA_DIR in Environment Variables for details about the location of dumped files.
What should I do when hwloc reports "operating system" warnings?
When the operating system reports invalid locality information (because of either software or hardware bugs), hwloc may fail to insert some objects in the topology because they cannot fit in the already-built tree of resources. If so, hwloc reports a warning like the following. The object causing this error is ignored; the discovery continues, but the resulting topology will miss some objects and may be asymmetric (see also What happens if my topology is asymmetric?).
****************************************************************************
* hwloc received invalid information from the operating system.
*
* L3 (cpuset 0x000003f0) intersects with NUMANode (P#0 cpuset 0x0000003f) without inclusion!
* Error occurred in topology.c line 940
*
* Please report this error message to the hwloc user's mailing list,
* along with the files generated by the hwloc-gather-topology script.
*
* hwloc will now ignore this invalid topology information and continue.
****************************************************************************
These errors are common on large AMD platforms because of BIOS and/or Linux kernel bugs causing invalid L3 cache information. In the above example, the hardware reports an L3 cache that is shared by 2 cores in the first NUMA node and 4 cores in the second NUMA node. That is wrong: the cache should actually be shared by all 6 cores of a single NUMA node. The resulting topology will miss some L3 caches.
If your application does not care about cache sharing, or if you do not plan to request cache-aware binding in your process launcher, you may likely ignore this error (and hide it by setting HWLOC_HIDE_ERRORS=1 in your environment).
Some platforms report similar warnings about conflicting Packages and NUMANodes.
On x86 hosts, passing HWLOC_COMPONENTS=x86 in the environment may work around some of these issues by switching to a different way of discovering the topology.
Upgrading the BIOS and/or the operating system may help. Otherwise, as explained in the message, reporting this issue to the hwloc developers (by sending the tarball that is generated by the hwloc-gather-topology script on this platform) is a good way to find out whether this is a software (operating system) or hardware (BIOS, etc.) bug.
See also Questions and Bugs. Opening an issue on GitHub automatically displays hints on what information you should provide when reporting such bugs.
Why does Valgrind complain about hwloc memory leaks?
If you are debugging your application with Valgrind, you want to avoid memory leak reports that are caused by hwloc and not by your program.
hwloc itself is often checked with Valgrind to make sure it does not leak memory. However, some global variables in hwloc dependencies are never freed. For instance libz allocates its global state once at startup and never frees it so that it may be reused later. Some libxml2 global state is also never freed because hwloc does not know whether it can safely ask libxml2 to free it (the application may also be using libxml2 outside of hwloc).
These unfreed variables cause leak reports in Valgrind. hwloc installs a Valgrind suppressions file to hide them. You should pass the following command-line option to Valgrind to use it:
--suppressions=/path/to/hwloc-valgrind.supp
Platform-specific
How do I enable ROCm SMI and select which version to use?
hwloc enables ROCm SMI as soon as it finds its development headers and libraries on the system. This detection consists in looking in /opt/rocm by default. If a ROCm version was specified with --with-rocm-version=4.4.0 or in the ROCM_VERSION environment variable, then /opt/rocm-<version> is used instead. Finally, a specific installation path may be specified with --with-rocm=/path/to/rocm.
As usual, developer header and library paths may also be set through environment variables such as LIBRARY_PATH and C_INCLUDE_PATH.
To find out whether ROCm SMI was detected and enabled, look in Probe / display I/O devices at the end of the configure script output. Passing --enable-rsmi will also cause configure to fail if RSMI could not be found and enabled in hwloc.
How do I enable CUDA and select which CUDA version to use?
hwloc enables CUDA as soon as it finds CUDA development headers and libraries on the system. This detection may be performed thanks to pkg-config, but it requires hwloc to know which CUDA version to look for. This may be done by passing --with-cuda-version=11.0 to the configure script. Otherwise hwloc will also look for the CUDA_VERSION environment variable.
If pkg-config does not work, passing --with-cuda=/path/to/cuda to the configure script is another way to define the corresponding library and header paths. Finally, these paths may also be set through environment variables such as LIBRARY_PATH and C_INCLUDE_PATH.
These paths, either detected by pkg-config or given manually, will also be used to detect NVML and OpenCL libraries and enable their hwloc backends.
To find out whether CUDA was detected and enabled, look in Probe / display I/O devices at the end of the configure script output. Passing --enable-cuda will also cause configure to fail if CUDA could not be found and enabled in hwloc.
Note that --with-cuda=/nonexisting may be used to disable all dependencies that are installed by CUDA, i.e. the CUDA, NVML and NVIDIA OpenCL backends, since the given directory does not exist.
How do I find the local MCDRAM NUMA node on Intel Xeon Phi processor?
Intel Xeon Phi processors introduced a new memory architecture by possibly having two distinct local memories: some normal memory (DDR) and some high-bandwidth on-package memory (MCDRAM). Processors can be configured in various clustering modes to have up to 4 Clusters. Moreover, each Cluster (quarter, half or whole processor) of the processor may have its own local parts of the DDR and of the MCDRAM. This memory and clustering configuration may be probed by looking at MemoryMode and ClusterMode attributes, see Hardware Platform Information and doc/examples/get-knl-modes.c in the source directory.
Starting with version 2.0, hwloc properly exposes this memory configuration. DDR and MCDRAM are attached as two memory children of the same parent, DDR first, and MCDRAM second if any. Depending on the processor configuration, that parent may be a Package, a Cache, or a Group object of type Cluster.
Hence cores may have one or two local NUMA nodes, listed by the core nodeset. An application may allocate local memory from a core by using that nodeset. The operating system will actually allocate from the DDR when possible, or fallback to the MCDRAM.
To allocate specifically on one of these memories, one should walk up the parent pointers until finding an object with some memory children. Looking at these memory children will give the DDR first, then the MCDRAM if any. Their nodeset may then be used for allocating or binding memory buffers.
One may also traverse the list of NUMA nodes until finding some whose cpuset matches the target core or PUs. The MCDRAM NUMA nodes may be identified thanks to the subtype field, which is set to MCDRAM.
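A minimal sketch of this approach (core is a previously obtained Core object; the 1MiB allocation size is only illustrative):
#include <string.h>

/* scan NUMA nodes, keep those local to the core, and identify the MCDRAM one */
hwloc_obj_t numa = NULL, mcdram = NULL;
while ((numa = hwloc_get_next_obj_by_type(topology, HWLOC_OBJ_NUMANODE, numa)) != NULL) {
  if (hwloc_bitmap_isincluded(core->cpuset, numa->cpuset)
      && numa->subtype && !strcmp(numa->subtype, "MCDRAM")) {
    mcdram = numa;
    break;
  }
}
if (mcdram) {
  /* allocate a buffer bound to the local MCDRAM node */
  void *buf = hwloc_alloc_membind(topology, 1<<20, mcdram->nodeset,
                                  HWLOC_MEMBIND_BIND, HWLOC_MEMBIND_BYNODESET);
  /* ... use buf, then hwloc_free(topology, buf, 1<<20); ... */
}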
Command-line tools such as hwloc-bind may bind memory on the MCDRAM by using the hbm keyword. For instance, to bind on the first MCDRAM NUMA node:
$ hwloc-bind --membind --hbm numa:0 -- myprogram
$ hwloc-bind --membind numa:0 -- myprogram
Why do I need hwloc-dump-hwdata for memory on Intel Xeon Phi processor?
Intel Xeon Phi processors may use the on-package memory (MCDRAM) as either memory or a memory-side cache (reported as a L3 cache by hwloc by default, see HWLOC_KNL_MSCACHE_L3 in Environment Variables). There are also several clustering modes that significantly affect the memory organization (see How do I find the local MCDRAM NUMA node on Intel Xeon Phi processor? for more information about these modes). Details about these are currently only available to privileged users. Without them, hwloc relies on a heuristic for guessing the modes.
The hwloc-dump-hwdata utility may be used to dump this privileged binary information into human-readable and world-accessible files that the hwloc library will later load. The utility should usually run as root once during boot, in order to update dumped information (stored under /var/run/hwloc by default) in case the MCDRAM or clustering configuration changed between reboots.
When SELinux MLS policy is enabled, a specific hwloc policy module may be required so that all users get access to the dumped files (in /var/run/hwloc by default). One may use hwloc policy files from the SELinux Reference Policy at https://github.com/TresysTechnology/refpolicy-contrib (see also the documentation at https://github.com/TresysTechnology/refpolicy/wiki/GettingStarted).
hwloc-dump-hwdata requires the dmi-sysfs kernel module to be loaded.
The utility is currently unneeded on platforms without Intel Xeon Phi processors.
See HWLOC_DUMPED_HWDATA_DIR in Environment Variables for details about the location of dumped files.
How do I build hwloc for BlueGene/Q?
IBM BlueGene/Q machines run a standard Linux on the login/frontend nodes and a custom CNK (Compute Node Kernel) on the compute nodes.
To discover the topology of a login/frontend node, hwloc should be configured as usual, without any BlueGene/Q-specific option.
However, one would likely rather discover the topology of the compute nodes where parallel jobs are actually running. If so, hwloc must be cross-compiled with the following configuration line:
./configure --host=powerpc64-bgq-linux --disable-shared --enable-static \
CPPFLAGS='-I/bgsys/drivers/ppcfloor -I/bgsys/drivers/ppcfloor/spi/include/kernel/cnk/'
CPPFLAGS may have to be updated if your platform headers are installed in a different directory.
How do I build hwloc for Windows?
hwloc binary releases for Windows are available on the website download pages (as pre-built ZIPs for both 32-bit and 64-bit x86 platforms). However, hwloc also offers several ways to build on Windows:
- The usual Unix build steps (configure, make and make install) work in the MSYS2/MinGW environment on Windows (the official hwloc binary releases are built this way). Some environment variables and options must be configured; see contrib/ci.inria.fr/job-3-mingw.sh in the hwloc repository for an example (used for nightly testing).
- hwloc also supports such Unix-like builds in Cygwin (an environment for porting Unix code to Windows).
- Windows builds are also possible with CMake (a CMakeLists.txt is available under contrib/windows-cmake/).
- hwloc also comes with an example Microsoft Visual Studio solution (under contrib/windows/) that may serve as a base for custom builds.
How to get useful topology information on NetBSD?
The NetBSD (and FreeBSD) backend uses x86-specific topology discovery (through the x86 component). This implementation requires CPU binding so as to query topology information from each individual processor. This means that hwloc cannot find any useful topology information unless user-level process binding is allowed by the NetBSD kernel. The security.models.extensions.user_set_cpu_affinity sysctl variable must be set to 1 to do so. Otherwise, only the number of processors will be detected.
Why does binding fail on AIX?
The AIX operating system requires specific user capabilities for attaching processes to resource sets (CAP_NUMA_ATTACH). Otherwise functions such as hwloc_set_cpubind() fail (return -1 with errno set to EPERM).
This capability must also be inherited (through the additional CAP_PROPAGATE capability) if you plan to bind a process before forking another process, for instance with hwloc-bind.
These capabilities may be given by the administrator with:
chuser "capabilities=CAP_PROPAGATE,CAP_NUMA_ATTACH" <username>
Compatibility between hwloc versions
How do I handle API changes?
The hwloc interface is extended with every new major release. Any application using the hwloc API should be prepared to check at compile-time whether some features are available in the currently installed hwloc distribution.
For instance, to check whether the hwloc version is at least 2.0, you should use:
#include <hwloc.h>
#if HWLOC_API_VERSION >= 0x00020000
...
#endif
To check for the API of release X.Y.Z at build time, you may compare HWLOC_API_VERSION with (X<<16)+(Y<<8)+Z.
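For instance, a minimal sketch requiring at least the 2.4.0 API at build time:
#include <hwloc.h>
#if HWLOC_API_VERSION < ((2<<16)+(4<<8)+0)
#error "this program requires the hwloc 2.4 API or later"
#endif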
For supporting older releases that do not have HWLOC_OBJ_NUMANODE and HWLOC_OBJ_PACKAGE yet, you may use:
#include <hwloc.h>
#if HWLOC_API_VERSION < 0x00010b00
#define HWLOC_OBJ_NUMANODE HWLOC_OBJ_NODE
#define HWLOC_OBJ_PACKAGE HWLOC_OBJ_SOCKET
#endif
Once a program is built against a hwloc library, it may also dynamically link with compatible libraries from other hwloc releases. The version of that runtime library may be queried with hwloc_get_api_version(). For instance, the following code enables the topology flag HWLOC_TOPOLOGY_FLAG_NO_DISTANCES when compiling on hwloc 2.8 or later, but it disables it at runtime if running on an older hwloc (otherwise hwloc_topology_set_flags() would fail).
unsigned long topology_flags = ...; /* wanted flags that were supported before 2.8 */
#if HWLOC_API_VERSION >= 0x20800
if (hwloc_get_api_version() >= 0x20800)
topology_flags |= HWLOC_TOPOLOGY_FLAG_NO_DISTANCES; /* wanted flags only supported in 2.8+ */
#endif
hwloc_topology_set_flags(topology, topology_flags);
See also How do I handle ABI breaks? for using hwloc_get_api_version() for testing ABI compatibility.
What is the difference between API and library version numbers?
HWLOC_API_VERSION is the version of the API. It changes when functions are added, modified, etc. However, it does not necessarily change from one release to another. For instance, two releases of the same series (e.g. 2.0.3 and 2.0.4) usually have the same HWLOC_API_VERSION (0x00020000). However, their HWLOC_VERSION strings are different ("2.0.3" and "2.0.4" respectively).
How do I handle ABI breaks?
The hwloc interface was deeply modified in release 2.0 to fix several issues of the 1.x interface (see Upgrading to the hwloc 2.0 API and the NEWS file in the source directory for details). The ABI was broken, which means applications must be recompiled against the new 2.0 interface.
To check that you are not mixing old/recent headers with a recent/old runtime library, check the major revision number in the API version:
#include <hwloc.h>
unsigned version = hwloc_get_api_version();
if ((version >> 16) != (HWLOC_API_VERSION >> 16)) {
fprintf(stderr,
"%s compiled for hwloc API 0x%x but running on library API 0x%x.\n"
"You may need to point LD_LIBRARY_PATH to the right hwloc library.\n"
"Aborting since the new ABI is not backward compatible.\n",
callname, HWLOC_API_VERSION, version);
exit(EXIT_FAILURE);
}
To specifically detect v2.0 issues:
#include <hwloc.h>
#if HWLOC_API_VERSION >= 0x00020000
/* headers are recent */
if (hwloc_get_api_version() < 0x20000)
... error out, the hwloc runtime library is older than 2.0 ...
#else
/* headers are pre-2.0 */
if (hwloc_get_api_version() >= 0x20000)
... error out, the hwloc runtime library is more recent than 2.0 ...
#endif
In theory, library sonames prevent linking with incompatible libraries. However, custom hwloc installations or improperly configured build environments may still lead to such issues. Hence running one of the above (cheap) checks before initializing the hwloc topology may be useful.
Are XML topology files compatible between hwloc releases?
XML topology files are forward-compatible: a XML file may be loaded by a hwloc library that is more recent than the hwloc release that exported that file.
However, hwloc XMLs are not always backward-compatible: Topologies exported by hwloc 2.x cannot be imported by 1.x by default (see XML changes for working around such issues). There are also some corner cases where backward compatibility is not guaranteed because of changes between major releases (for instance 1.11 XMLs could not be imported in 1.10).
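As a minimal sketch (the output file name is only illustrative), exporting in the 1.x-compatible XML format looks like this:
/* export an XML file that processes running an older hwloc 1.x may import */
hwloc_topology_export_xml(topology, "topo-v1.xml", HWLOC_TOPOLOGY_EXPORT_XML_FLAG_V1);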
XMLs are exchanged at runtime between some components of the HPC software stack (for instance the resource managers and MPI processes). Building all these components on the same (cluster-wide) hwloc installation is a good way to avoid such incompatibilities.
Are synthetic strings compatible between hwloc releases?
Synthetic strings (see Synthetic topologies) are forward-compatible: a synthetic string generated by a release may be imported by future hwloc libraries.
However they are often not backward-compatible because new details may have been added to synthetic descriptions in recent releases. Some flags may be given to hwloc_topology_export_synthetic() to avoid such details and stay backward compatible.
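A minimal sketch of such an export without object attributes (the buffer size is only illustrative):
/* export a synthetic description that older hwloc releases are more likely to accept */
char buf[1024];
hwloc_topology_export_synthetic(topology, buf, sizeof(buf),
                                HWLOC_TOPOLOGY_EXPORT_SYNTHETIC_FLAG_NO_ATTRS);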
Is it possible to share a shared-memory topology between different hwloc releases?
Shared-memory topologies (see Sharing topologies between processes) have strong requirements on compatibility between hwloc libraries. Adopting a shared-memory topology fails if it was exported by a non-compatible hwloc release. Releases with same major revision are usually compatible (e.g. hwloc 2.0.4 may adopt a topology exported by 2.0.3) but different major revisions may be incompatible (e.g. hwloc 2.1.0 cannot adopt from 2.0.x).
Topologies are shared at runtime between some components of the HPC software stack (for instance the resource managers and MPI processes). Building all these components on the same (system-wide) hwloc installation is a good way to avoid such incompatibilities.