
I am working on a Serverless Flask app that is deployed to AWS Lambda. The program uses the Cryptography library (using version 3.4.7). Locally, the program runs fine without any issue. However, whenever deployed on Lambda, the following error appears:

    from cryptography.fernet import Fernet
  File "/var/task/cryptography/fernet.py", line 16, in <module>
    from cryptography.hazmat.primitives import hashes, padding
  File "/var/task/cryptography/hazmat/primitives/padding.py", line 11, in <module>
    from cryptography.hazmat.bindings._padding import lib
ImportError: /var/task/cryptography/hazmat/bindings/_padding.abi3.so: cannot open shared object file: No such file or directory

And when using some required functions from the hazmat ("Hazardous Materials") module, a very similar error appears:

File "/var/task/cryptography/hazmat/primitives/kdf/pbkdf2.py", line 28, in __init__
    backend = _get_backend(backend)
File "/var/task/cryptography/hazmat/backends/__init__.py", line 23, in _get_backend
    return default_backend()
File "/var/task/cryptography/hazmat/backends/__init__.py", line 14, in default_backend
    from cryptography.hazmat.backends.openssl.backend import backend
File "/var/task/cryptography/hazmat/backends/openssl/__init__.py", line 6, in <module>
    from cryptography.hazmat.backends.openssl.backend import backend
File "/var/task/cryptography/hazmat/backends/openssl/backend.py", line 113, in <module>
    from cryptography.hazmat.bindings.openssl import binding
File "/var/task/cryptography/hazmat/bindings/openssl/binding.py", line 14, in <module>
    from cryptography.hazmat.bindings._openssl import ffi, lib
ImportError: /var/task/cryptography/hazmat/bindings/_openssl.abi3.so: cannot open shared object file: No such file or directory

However, the library files referenced do exist and they are in the exact paths indicated.

The app includes cryptography==3.4.7 in the requirements.txt as a dependency. Serverless then installs the packages while deploying to AWS with sls deploy. Serverless puts everything in a zip and uploads it to AWS. I can see all the files in this zip folder as expected.

I thought that it might be an issue with serverless incorrectly uploading or installing the packages when deploying, so I even tried including the cryptography folder directly in my project. However, despite any changes to the serverless configuration or the cryptography package itself, I have been unsuccessful in using this package on my deployed Lambda. Does anyone have any ideas what I could do to make this work?

  • You have to specify exactly how you created your function or layer with these dependencies. Commented May 22, 2021 at 8:05
  • I have added some information on the deployment. Using sls deploy, the packages are all added to a zip folder with the app and uploaded. Any help is greatly appreciated! Commented May 22, 2021 at 16:07
  • See this answer Commented Feb 24, 2022 at 9:49

8 Answers


One recommendation Amazon makes is to use the SAM CLI to build the distribution inside a Docker container. However, in my situation I wasn't able to use Docker in the build environment.

Amazon provides other documentation on how to use pip with explicit command-line flags to ensure that wheels matching the Lambda environment are downloaded:

pip install \
  --platform manylinux2014_x86_64 \
  --implementation cp \
  --python-version 3.9 \
  --only-binary=:all: --upgrade \
  --target=build/package \
  cryptography==38.0.3

The --target=build/package flag causes the dependencies to be downloaded and installed into that directory, making it easy to zip them up for upload to Lambda.

These flags can also be used if you have a setup.cfg or pyproject.toml file by using "." as the resource to load, rather than the explicitly named library.

I expect that as Amazon introduces new runtime environments and deprecates older ones, the --platform and --python-version flags will need to change.
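After installing into build/package, the directory's contents need to sit at the root of the zip. A minimal packaging sketch in Python (the placeholder file is created here only so the snippet runs stand-alone; in practice the directory is already populated by the pip command above):

```python
import os
import shutil

# Assume the pip command above installed dependencies into build/package;
# a placeholder is created here so the sketch runs stand-alone.
os.makedirs("build/package", exist_ok=True)
open("build/package/placeholder.txt", "w").close()

# Lambda expects libraries at the ZIP root, so archive the directory's
# contents rather than the directory itself.
shutil.make_archive("lambda_deploy", "zip", root_dir="build/package")
```

The resulting lambda_deploy.zip can then be uploaded directly or referenced from your deployment config; your handler code goes into the same archive.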


1 Comment

I was able to get this to work based on the above, except in my case I had to place the library in a "python" subdirectory per the AWS Lambda Layer docs. Here are the commands I used before running cdk deploy: rm -rf ./layers/cryptographyFolder; mkdir -p ./layers/cryptographyFolder/python followed by: pip3 install --platform manylinux2014_x86_64 --implementation cp --python-version 3.9 --only-binary=:all: --upgrade --target=./layers/cryptographyFolder/python cryptography==38.0.3

What I did to fix a similar problem, while trying to add a layer with the cryptography library to a lambda function, was to use the same runtime and processor architecture in both the lambda function and the layer.

For example, my problem was that I had a lambda function running Python 3.9 on an arm64 architecture, but I was creating the layer .zip file with Python 3.8 on an x86_64 architecture.

Don't ask me why it was easier for me to re-create the lambda in Python 3.8 and in x86_64, rather than the other way around.

Anyhow, as soon as I added the layer (with the cryptography library) to the lambda, it ran smoothly.

So, my theory is that you need to match both the runtime and architecture in order for a layer to work properly with a lambda function.

Additionally, now that I look up my solution, it is actually backed up by this article: https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html

(see the notes in it)
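One quick way to check what has to match: log the interpreter version and machine architecture from inside the function itself. A hypothetical diagnostic handler along these lines:

```python
import platform

def handler(event, context):
    # Compare these values with the runtime/architecture the layer was
    # built for; an "x86_64" vs "aarch64" mismatch produces exactly the
    # kind of ImportError described in the question.
    info = {
        "python": platform.python_version(),
        "machine": platform.machine(),
    }
    print(info)
    return info
```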



I had a similar problem before that was resolved by running the deployment command from a linux machine. I use a mac for development and I was trying to deploy my lambda function from my mac. However, when it was deployed some of the dependencies threw import errors.

From my experience, this happens because dependencies are packaged differently on macOS and on Linux: pip installs platform-specific wheels containing compiled binaries. Hence, try running the serverless deployment command from inside a Linux machine to see if that works.

In my case, I set up a gitlab CI/CD pipeline to run the command inside the environment of gitlab pipeline and that resolved the problem.

1 Comment

I am deploying from a linux system currently. The only thing that I can think of that might make a difference is that my machine has an ARM processor. I will have to give it a go with a different setup. Thanks!

I had exactly the same problem when deploying lambdas to AWS from a Mac with an M1 processor. I fixed it by adding the dockerRunCmdExtraArgs argument to the serverless config:

pythonRequirements:
  dockerizePip: 'non-linux'
  dockerImage: public.ecr.aws/sam/build-python3.9:latest
  dockerRunCmdExtraArgs: ['--platform', 'linux/amd64']
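For context, a sketch of where this block sits in serverless.yml: it lives under the custom key and assumes the serverless-python-requirements plugin is installed (the plugins section below is my addition, not part of the original answer):

```yaml
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: 'non-linux'
    dockerImage: public.ecr.aws/sam/build-python3.9:latest
    dockerRunCmdExtraArgs: ['--platform', 'linux/amd64']
```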



AWS Lambda layer for Python 3.10 with cryptography:

mkdir -p python/lib/python3.10/site-packages 
echo "cryptography" > requirements.txt 
sudo docker run -v "$PWD":/var/task "public.ecr.aws/sam/build-python3.10:latest" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.10/site-packages/; exit"
zip -r lambda_function.zip .
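A quick sanity check on the resulting archive: Python Lambda layers must have a top-level python/ directory, since that is what the runtime adds to sys.path. A sketch (it creates a stand-in directory tree so it runs anywhere; in practice you would inspect the zip built by the docker command above):

```python
import os
import zipfile

# Stand-in for the tree produced by the docker/pip commands above.
os.makedirs("python/lib/python3.10/site-packages", exist_ok=True)

# Build a layer zip and confirm every entry is under "python/".
with zipfile.ZipFile("lambda_function.zip", "w") as zf:
    zf.write("python/lib/python3.10/site-packages")

names = zipfile.ZipFile("lambda_function.zip").namelist()
assert all(n.startswith("python/") for n in names)
```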

1 Comment

This is a very simple solution, although you don't need Docker for this; you can let pip do most of the heavy lifting for you: repost.aws/knowledge-center/lambda-import-module-error-python

I had similar errors after migrating my Lambda functions from python 3.6 to python 3.9

I use an amazonlinux docker container for development, testing, and deployment (via serverless).

Per cryptography's documentation, installation on Linux is not as straightforward as on macOS, even though cryptography ships manylinux wheels (as of version 2.0): your pip must be recent enough to pick them up.

Here's what you could try:

  1. Upgrade pip and reinstall cryptography via pip again; or

  2. Compile cryptography yourself. You'll need a C compiler, a Rust compiler, headers for Python (if you're not using PyPy), and headers for OpenSSL and libffi. On Red Hat-based systems these packages are redhat-rpm-config gcc libffi-devel python3-devel openssl-devel cargo; install them with your package manager and then run:

    pip install cryptography --no-binary cryptography

In cryptography's FAQ page, there's a section about AWS Lambda.



I encountered the same issue on a Mac M1 when deploying Lambda functions with the Serverless Framework. I solved it by setting dockerRunCmdExtraArgs: ['--platform', 'linux/amd64'] in the serverless config, along with dockerizePip: 'non-linux' and dockerImage: public.ecr.aws/sam/build-python3.9:latest.

1 Comment

This does not really answer the question. If you have a different question, you can ask it separately. To get notified when this question gets new answers, you can follow this question.

Others have already mentioned this, but my case was also the issue of architecture (x86_64 vs. arm64). The Lambda runtime was for x86_64 (Intel 64-bit architecture), but the Lambda layer created was for arm64.

If you are using Docker to generate the Lambda layer (e.g. https://github.com/patrickm663/aws-lambda-layer-generator), you can specify the layer's architecture through the --platform keyword. In my case, I changed FROM python:3.9-slim to FROM --platform=linux/x86_64 python:3.9-slim in the Dockerfile.

