This repository was archived by the owner on Oct 6, 2025. It is now read-only.

Conversation

@mweinelt mweinelt commented Sep 27, 2023

As a distro, we are packaging tensorflow, but not tflite. The latter is a small cut-out of tensorflow, so they share the same entrypoint.

>>> import tensorflow.lite as tflite
2023-09-29 21:46:53.737907: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-09-29 21:46:53.755324: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
>>> tflite.Interpreter
<class 'tensorflow.lite.python.interpreter.Interpreter'>

This allows us to provide tensorflow in place of tflite, at the cost of a larger runtime closure but with reduced maintenance load.

@mweinelt mweinelt marked this pull request as draft September 29, 2023 19:37
@mweinelt mweinelt force-pushed the allow-tensorflow-drop-in branch from f1e1889 to 23b1bc9 on September 29, 2023 19:46
@mweinelt mweinelt marked this pull request as ready for review September 29, 2023 20:37
