@@ -522,3 +522,69 @@ advanced usage, please refer to the original paper:
`IR2Vec: LLVM IR Based Scalable Program Embeddings <https://doi.org/10.1145/3418463>`_.
The LLVM source code for ``IR2Vec`` can also be explored to understand the
implementation details.
+
+ Building with ML support
+ ========================
+
+ **NOTE** For up to date information on custom builds, see the ``ml-*``
+ `build bots <http://lab.llvm.org>`_. They are set up using
+ `this script <https://github.com/google/ml-compiler-opt/blob/main/buildbot/buildbot_init.sh>`_.
+
+ Embed pre-trained models (aka "release" mode)
+ ---------------------------------------------
+
+ This supports the ``ReleaseModeModelRunner`` model runners.
+
+ You need a tensorflow pip package for the AOT (ahead-of-time) Saved Model compiler
+ and a thin wrapper for the native function generated by it. We currently support
+ TF 2.15. We recommend using a Python virtual env (in which case, remember to
+ pass ``-DPython3_ROOT_DIR`` to ``cmake``).
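+
+ For example, the setup could look like the following sketch (the venv path and
+ the exact patch version of the package are illustrative, not prescriptive):
+
+ .. code-block:: console
+
+   python3 -m venv /tmp/tf-venv
+   . /tmp/tf-venv/bin/activate
+   pip install tensorflow==2.15.*
+   # later, configure LLVM with -DPython3_ROOT_DIR=/tmp/tf-venv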
+
+ Once you install the pip package, find where it was installed:
+
+ .. code-block:: console
+
+   TF_PIP=$(sudo -u buildbot python3 -c "import tensorflow as tf; import os; print(os.path.dirname(tf.__file__))")
+
+ Then build LLVM:
+
+ .. code-block:: console
+
+   cmake -DTENSORFLOW_AOT_PATH=$TF_PIP \
+     -DLLVM_INLINER_MODEL_PATH=<path to inliner saved model dir> \
+     -DLLVM_RAEVICT_MODEL_PATH=<path to regalloc eviction saved model dir> \
+     <...other options...>
+
+ The example shows the flags for both inlining and regalloc, but either may be
+ omitted.
+
+ You can also specify a URL for the path, and it is also possible to pre-compile
+ the header and object and then just point to the precompiled artifacts. See for
+ example ``LLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL``.
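+
+ A sketch of such a configuration (the paths are placeholders, and the
+ ``LLVM_OVERRIDE_MODEL_OBJECT_INLINERSIZEMODEL`` name is an assumption mirroring
+ the header variable; check ``llvm/cmake/modules/TensorFlowCompile.cmake`` for
+ the exact spelling):
+
+ .. code-block:: console
+
+   cmake -DLLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL=/path/to/InlinerSizeModel.h \
+     -DLLVM_OVERRIDE_MODEL_OBJECT_INLINERSIZEMODEL=/path/to/InlinerSizeModel.o \
+     <...other options...>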
+
+ **Note** that we are transitioning away from the AOT compiler shipping with the
+ tensorflow package, and to an EmitC, in-tree solution, so these details will
+ change soon.
+
+ Using TFLite (aka "development" mode)
+ -------------------------------------
+
+ This supports the ``ModelUnderTrainingRunner`` model runners.
+
+ Build the TFLite package using `this script <https://raw.githubusercontent.com/google/ml-compiler-opt/refs/heads/main/buildbot/build_tflite.sh>`_.
+ Then, assuming you ran that script in ``/tmp/tflitebuild``, just pass
+ ``-C /tmp/tflitebuild/tflite.cmake`` to the ``cmake`` invocation for LLVM.
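+
+ Concretely, the steps might look like this (the working directory and the
+ location of the script checkout are illustrative):
+
+ .. code-block:: console
+
+   mkdir /tmp/tflitebuild && cd /tmp/tflitebuild
+   bash /path/to/ml-compiler-opt/buildbot/build_tflite.sh
+   cmake -C /tmp/tflitebuild/tflite.cmake <...other options...> <path to llvm>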
+
+ Interactive Mode (for training / research)
+ ------------------------------------------
+
+ The ``InteractiveModelRunner`` is available with no extra dependencies. For the
+ optimizations that are currently MLGO-enabled, it may be used as follows:
+
+ - for inlining: ``-mllvm -enable-ml-inliner=release -mllvm -inliner-interactive-channel-base=<name>``
+ - for regalloc eviction: ``-mllvm -regalloc-evict-advisor=release -mllvm -regalloc-evict-interactive-channel-base=<name>``
+
+ where ``<name>`` is a path fragment. We expect to find two files,
+ ``<name>.in`` (readable, data incoming from the managing process) and
+ ``<name>.out`` (writable, the model runner sends data to the managing process).
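+
+ For instance, a managing process could create the two channel files as named
+ pipes and then invoke ``clang`` (the channel base ``/tmp/inliner-channel`` is
+ just an example path):
+
+ .. code-block:: console
+
+   mkfifo /tmp/inliner-channel.in /tmp/inliner-channel.out
+   clang -O2 -c test.c \
+     -mllvm -enable-ml-inliner=release \
+     -mllvm -inliner-interactive-channel-base=/tmp/inliner-channel
+
+ The managing process then reads feature data from ``<name>.out`` and writes
+ advice to ``<name>.in`` (see the ``InteractiveModelRunner`` documentation for
+ the exact protocol).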
+