
I have a C++ application that uses machine learning from Python. My current approach is to build a single-file executable with PyInstaller and then just run it from C++. This has obvious drawbacks, notably inter-application communication: at the moment the two programs talk through an intermediate JSON file, which is massively suboptimal for my future requirements. What's nice about the current approach is that it works on all major platforms without too much hassle.

Section 1.6 of Python's manual is titled "Compiling and Linking under Unix-like systems".

Does this mean that the Python interpreter will be inside my application binary, so the target system doesn't need to have Python installed because the program will always use the embedded interpreter? If so, what about Python libraries? Can I embed a whole conda environment?

Also, what about:

"(...) under Unix-like systems"

Does this mean this approach is not cross-platform?

Thanks in advance.

The problems and complications you get from embedding the interpreter in any non-trivial program are easy to underestimate. If the only reason you're doing this is to simplify IPC, you might want to look up the sample of how to call a Python function from C. There are plenty of perfectly fine IPC solutions that are not much less efficient and are easier to use than that. Been there, done that, wished I hadn't. Commented May 15, 2020 at 17:36

2 Answers


Embedding the Python interpreter is possible on all platforms. However, it will only be the interpreter: embedding any libraries as well will be much harder, or even impossible.

But since you already seem to deploy the Python libs, you can use them just fine from the embedded interpreter. You can then bridge C++ and Python without IPC, since both run in the same process.

pybind11 is very nice for embedding and generating C++ <-> Python interfaces.

A possible alternative, depending on the libraries in use, may be to export the model and use a C++ library to load and use it (for instance Tensorflow -> ONNX -> ONNX runtime).


1 Comment

Thanks for the ONNX info, it is possible that importing just the model is the best way to do this.

It means that CPython (the Python interpreter) will be inside your application. You will be able to run Python code and observe and manipulate the virtual machine state directly from C++ code (a good entry point is the C API reference here). Your application might have some additional dynamic library dependencies (which ones depends on the compilation options of the embedded Python). Also, the interpreter isn't completely self-contained: it depends on some external .py modules normally shipped with the Python distribution (the standard library). If you plan to import external modules that expect the standard library, you will have to ship it with your application. There are also ways to build modules into the binary (freeze), but you might run into issues, especially with modules that rely on the filesystem.

As far as I have tried, this procedure works on Unix-like systems and on Windows (where the easiest way is to link against a DLL which you then ship with your application). On Windows you also need to make sure that you compile with the same compiler that was used to compile the DLL (or compile the Python DLL from source yourself). Here is additional information about embedding on Windows: https://docs.python.org/3/faq/windows.html#how-can-i-embed-python-into-a-windows-application
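On either platform, letting the build system locate the right headers and import library saves a lot of this pain. A hypothetical CMake setup (assuming CMake >= 3.18, whose FindPython3 module has a dedicated Development.Embed component) might look like:

```cmake
cmake_minimum_required(VERSION 3.18)
project(embedded_demo CXX)

# Locate Python headers plus the embedding library
# (libpythonX.Y on Unix, pythonXY.lib / pythonXY.dll on Windows)
find_package(Python3 REQUIRED COMPONENTS Interpreter Development.Embed)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE Python3::Python)
```
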

Just note that embedding Python and shipping 3rd party modules with your application might have some licensing consequences.

