A packaging prelude:
Before you can even worry about reading resource files, make sure the data files are getting packaged into your distribution at all. It is easy to read them directly from the source tree, but the important part is making sure these resource files are accessible from code within an installed package.
Structure your project like this, putting data files into a subdirectory within the package:
.                         <--- project root
├── mypackage             <--- source root
│   ├── __init__.py
│   ├── templates         <--- resources subdirectory
│   │   └── temp_file     <--- this is a data file, not code
│   ├── mymodule1.py
│   └── mymodule2.py
├── README.rst
├── MANIFEST.in
└── setup.py
You should pass include_package_data=True in the setup() call. The manifest file is only needed if you want to use setuptools/distutils and build source distributions. To make sure the templates/temp_file gets packaged for this example project structure, add a line like this into the manifest file:
recursive-include mypackage *
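For completeness, a minimal setup.py for this layout might look something like the sketch below (the name and version are just placeholders, not taken from any real project):
from setuptools import find_packages, setup

setup(
    name="mypackage",           # placeholder project name
    version="0.1",              # placeholder version
    packages=find_packages(),   # discovers "mypackage"
    include_package_data=True,  # package the files matched by MANIFEST.in
)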
Historical cruft note: Using a manifest file is not needed for modern build backends such as flit or poetry, which include package data files by default. So, if you're using pyproject.toml and you don't have a setup.py file, then you can ignore all the stuff about MANIFEST.in.
Now, with packaging out of the way, onto the reading part...
Recommendation:
Use importlib.resources.files. It returns a Traversable for accessing resources, with usage similar to pathlib:
from importlib.resources import files

my_resources = files("mypackage")
data = (my_resources / "templates" / "temp_file").read_bytes()
This works in zips, and it doesn't require spurious __init__.py files to be added in resource subdirectories.
Python 3.9+ is required for importlib.resources.files. For older Python versions, the importlib_resources backport is available on PyPI and supports the same APIs.
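If you need to straddle old and new interpreters, one common compatibility pattern (just a sketch, not something the backport mandates) is to try the standard library first and fall back to the backport:
import sys

if sys.version_info >= (3, 9):
    from importlib.resources import files  # stdlib on Python 3.9+
else:
    from importlib_resources import files  # pip install importlib_resources

data = (files("mypackage") / "templates" / "temp_file").read_bytes()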
Bad ways to avoid:
Bad way #1: using relative paths from a source file
This was previously described in the accepted answer. At best, it looks something like this:
from pathlib import Path
resource_path = Path(__file__).parent / "templates"
data = resource_path.joinpath("temp_file").read_bytes()
What's wrong with that? The assumption that your package's files and subdirectories exist on the filesystem is not always correct. This approach doesn't work when the code is executed from a zip or a wheel, and it may be entirely out of the user's control whether or not your package gets extracted to a filesystem at all.
Bad way #2: using pkg_resources APIs
This is described in the top-voted answer. It looks something like this:
from pkg_resources import resource_string
data = resource_string(__name__, "templates/temp_file")
What's wrong with that? It adds a runtime dependency on setuptools, which should preferably be an install-time dependency only. Importing and using pkg_resources can become really slow, because the code builds up a working set of all installed packages even though you were only interested in your own package's resources. That's not a big deal at install time (since installation is a one-off), but it's ugly at runtime.
Bad way #3: using legacy importlib.resources APIs
This was previously the recommendation of the top-voted answer. These APIs have been in the standard library since Python 3.7. It looks like this:
from importlib.resources import read_binary
data = read_binary("mypackage.templates", "temp_file")
What's wrong with that? Well, unfortunately, the implementation left some things to be desired, and it was deprecated in Python 3.11. Using importlib.resources.read_binary, importlib.resources.read_text and friends requires you to add an empty file templates/__init__.py so that data files reside within a sub-package rather than in a subdirectory. It also exposes the mypackage/templates subdirectory as an importable mypackage.templates sub-package in its own right. This won't work with many existing packages which are already published using resource subdirectories instead of resource sub-packages, and it's inconvenient to add __init__.py files everywhere, muddying the boundary between data and code.
This approach was deprecated in upstream importlib_resources in 2021, and in the stdlib from Python 3.11. bpo-45514 tracked the deprecation, and the migrating from legacy guide offers _legacy.py wrappers to aid with the transition.
Even more confusingly, the functional APIs may become "undeprecated" again in Python 3.13; the names remain the same, but the usage is subtly different: read_binary, read_text.
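To illustrate the difference (this is my reading of the docs, not code from any particular project), the same resource would be addressed roughly like this under the two styles:
from importlib.resources import read_binary

# legacy style (3.7-3.10): the resource must sit directly inside an importable
# sub-package, so this needs a templates/__init__.py
data = read_binary("mypackage.templates", "temp_file")

# revived functional API (3.13+): anchor package plus path segments, so a
# plain templates/ subdirectory works without an __init__.py
data = read_binary("mypackage", "templates", "temp_file")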
Honorable mention: using pkgutil
Long before importlib.resources existed, there was a standard library pkgutil module for accessing resources. It actually still works fine! In library code it looks like this:
# within mypackage/mymodule1.py, for example
import pkgutil
data = pkgutil.get_data(__name__, "templates/temp_file")
It works in zips. It works on Python 2 and Python 3. It doesn't require any third-party dependencies. It's probably more battle-tested than importlib.resources, and might even be a better choice if you need to support a wide range of Python versions.
Example project:
I've created an example project on GitHub and uploaded it to PyPI, which demonstrates all five approaches discussed above. Try it out with:
$ pip install resources-example
$ resources-example
See https://github.com/wimglenn/resources-example for more info.