# `JAX Getting Started` Sample
The `JAX Getting Started` sample demonstrates how to train a JAX model and run inference on Intel® hardware.
| Property | Description
|:--- |:---
| Category | Getting Started Sample
| What you will learn | How to start using JAX* on Intel® hardware.
| Time to complete | 10 minutes
## Purpose
JAX is a high-performance numerical computing library that enables automatic differentiation. It provides features like just-in-time compilation and efficient parallelization for machine learning and scientific computing tasks.
This sample code shows how to get started with JAX on CPU. The sample code defines a simple neural network that trains on the MNIST dataset using JAX for parallel computations across multiple CPU cores. The network trains over multiple epochs, evaluates accuracy, and adjusts parameters using stochastic gradient descent across devices.
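As a minimal illustration of the automatic differentiation and just-in-time compilation mentioned above (a standalone sketch, not code from the sample itself):

```python
import jax
import jax.numpy as jnp

@jax.jit  # compile the function with XLA on first call
def f(x):
    return jnp.sum(jnp.tanh(x) ** 2)

# jax.grad transforms f into a function that computes df/dx.
g = jax.grad(f)(jnp.array([0.0, 1.0]))
# d/dx tanh(x)^2 = 2*tanh(x)*(1 - tanh(x)^2): zero at x=0, ~0.64 at x=1
```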
## Prerequisites
| Optimized for | Description
|:--- |:---
| OS | Ubuntu* 22.04 and newer
| Hardware | Intel® Xeon® Scalable processor family
| Software | JAX
> **Note**: AI and Analytics samples are validated on AI Tools Offline Installer. For the full list of validated platforms refer to [Platform Validation](https://github.com/oneapi-src/oneAPI-samples/tree/master?tab=readme-ov-file#platform-validation).
## Key Implementation Details
The getting-started sample code uses the Python file `spmd_mnist_classifier_fromscratch.py` from the `examples` directory of the JAX repository. It implements training and inference of a simple neural network on MNIST images. The images are downloaded to a temporary directory the first time the example is run.
- **init_random_params** initializes the neural network weights and biases for each layer.
- **predict** computes the forward pass of the network, applying weights, biases, and activations to the inputs.
- **loss** calculates the cross-entropy loss between predictions and target labels.
- **spmd_update** performs parallel gradient updates across multiple devices using JAX's `pmap` and `lax.psum`.
- **accuracy** computes the accuracy of the model by predicting the class of each input in the batch and comparing it to the true target class. It uses `jnp.argmax` to find the predicted class and then takes the mean of correct predictions.
- **data_stream** generates batches of shuffled training data. It reshapes the data so that it can be split across multiple cores, ensuring the batch size is divisible by the number of cores for parallel processing.
- **training loop** trains the model for a set number of epochs, updating parameters and printing training/test accuracy after each epoch. The parameters are replicated across devices and updated in parallel using `spmd_update`; after each epoch, the model's accuracy is evaluated on both training and test data.
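The `spmd_update` pattern above can be sketched as follows. This is a minimal illustration with a hypothetical one-layer model and mean-squared-error loss, not the sample's actual MNIST network; on a default CPU setup `jax.local_device_count()` is 1, so `pmap` runs a single replica.

```python
from functools import partial

import jax
import jax.numpy as jnp
from jax import lax

def predict(params, inputs):
    # Hypothetical single dense layer (stand-in for the sample's network).
    w, b = params
    return jnp.dot(inputs, w) + b

def loss(params, batch):
    inputs, targets = batch
    return jnp.mean((predict(params, inputs) - targets) ** 2)

@partial(jax.pmap, axis_name="batch")
def spmd_update(params, batch):
    step_size = 0.01
    grads = jax.grad(loss)(params, batch)
    # Sum gradients across devices so every replica applies the same update.
    grads = [lax.psum(g, "batch") for g in grads]
    return [p - step_size * g for p, g in zip(params, grads)]

n_dev = jax.local_device_count()
w, b = jnp.zeros((3, 1)), jnp.zeros((1,))
# Replicate the parameters across devices; shard the batch along axis 0.
params = [jnp.broadcast_to(p, (n_dev,) + p.shape) for p in (w, b)]
inputs = jax.random.normal(jax.random.PRNGKey(0), (n_dev, 8, 3))
targets = jnp.ones((n_dev, 8, 1))
params = spmd_update(params, (inputs, targets))  # one synchronized SGD step
```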
## Environment Setup
You will need to download and install the following toolkits, tools, and components to use the sample.
**1. Get Intel® AI Tools**
Required AI Tools: 'JAX'
<br>If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on AI Tools Offline Installer. It is recommended to select Offline Installer option in AI Tools Selector.<br>
Please see the [supported versions](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html).
AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the docker and samples.
## Example Output
1. When the program is run, you should see results similar to the following:
```
downloaded https://storage.googleapis.com/cvdf-datasets/mnist/train-images-idx3-ubyte.gz to /tmp/jax_example_data/
downloaded https://storage.googleapis.com/cvdf-datasets/mnist/train-labels-idx1-ubyte.gz to /tmp/jax_example_data/
downloaded https://storage.googleapis.com/cvdf-datasets/mnist/t10k-images-idx3-ubyte.gz to /tmp/jax_example_data/
downloaded https://storage.googleapis.com/cvdf-datasets/mnist/t10k-labels-idx1-ubyte.gz to /tmp/jax_example_data/
Epoch 0 in 2.71 sec
Training set accuracy 0.7381166815757751
Test set accuracy 0.7516999840736389
Epoch 1 in 2.35 sec
Training set accuracy 0.81454998254776
Test set accuracy 0.8277999758720398
Epoch 2 in 2.33 sec
Training set accuracy 0.8448166847229004
Test set accuracy 0.8568999767303467
Epoch 3 in 2.34 sec
Training set accuracy 0.8626833558082581
Test set accuracy 0.8715999722480774
Epoch 4 in 2.30 sec
Training set accuracy 0.8752999901771545
Test set accuracy 0.8816999793052673
Epoch 5 in 2.33 sec
Training set accuracy 0.8839333653450012
Test set accuracy 0.8899999856948853
Epoch 6 in 2.37 sec
Training set accuracy 0.8908833265304565
Test set accuracy 0.8944999575614929
Epoch 7 in 2.31 sec
Training set accuracy 0.8964999914169312
Test set accuracy 0.8986999988555908
Epoch 8 in 2.28 sec
Training set accuracy 0.9016000032424927
Test set accuracy 0.9034000039100647
Epoch 9 in 2.31 sec
Training set accuracy 0.9060333371162415
Test set accuracy 0.9059999585151672
```
2. Troubleshooting
If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the *[Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)* for more information on using the utility.