17

I'm creating an AWS Lambda function in which I need to read info from an API, create a CSV file, and upload it to an SFTP server.

I've installed paramiko in my venv (using Ubuntu on Windows), and the cffi module comes along as a dependency, but when the code runs I receive this error:

{
  "errorMessage": "Unable to import module 'python_handler': No module named '_cffi_backend'",
  "errorType": "Runtime.ImportModuleError"
}

Here is my code:

import paramiko
import requests
from datetime import datetime
from datetime import timedelta
from requests.auth import HTTPBasicAuth


def lambda_handler(event, context):
    # Timestamps for the query window: now and 24 hours ago.
    # Capture utcnow() once so all fields come from the same instant.
    fmt = '%Y-%m-%dT%H:%M:%S.000Z'
    now = datetime.utcnow()
    day_ago = now - timedelta(hours=24)

    # API info
    api_key = 'XYZ'
    page_id = 'XYZ'

    # Call the API
    param = {
        'sort_order': 'asc',
        'from': day_ago.strftime(fmt),
        'to': now.strftime(fmt)
    }
    r = requests.get('https://api.unbounce.com/pages/{}/leads'.format(page_id), auth=HTTPBasicAuth(api_key, 'pass'), params=param)

    # If the connection is OK
    if r.status_code == 200:
        # If there are any results
        if len(r.json()['leads']) > 0:
            cont = ''
            for lead in r.json()['leads']:
                cont += lead['form_data']['cpf'][0] + ','
                cont += lead['form_data']['nome_completo'][0] + ','
                cont += lead['form_data']['email'][0] + '\n'  # newline so each lead gets its own row
        else:
            return 'Não há resultados no momento'  # "No results at the moment"
    else:
        return 'Falha na conexão'  # "Connection failed"

    # Write the CSV file (Lambda can only write to /tmp)
    f = open('/tmp/my_csv.csv', 'w')
    f.write('PAC_ID, PAC_NAME, PAC_EMAIL\n')
    f.write(cont)
    f.close()

    transport = paramiko.Transport(('host-info', 22))
    transport.connect(None, 'user-info', 'password-info', None)

    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.chdir('Import')
    sftp.put('/tmp/my_csv.csv', 'my_csv.csv')
    sftp.close()
    transport.close()

    return 'OK'

Any idea how I can solve this?

10
  • Do you get this error when you run on AWS or on your private env? Commented Jul 24, 2019 at 18:45
  • @balderman, on AWS Commented Jul 24, 2019 at 19:05
  • Did you create a deployment package? Commented Jul 24, 2019 at 19:14
  • @balderman, yeah, without the SFTP part it works well! When I add pysftp or paramiko, the error occurs. Commented Jul 24, 2019 at 19:15
  • So it is quite clear that paramiko (or a paramiko dependency) needs a dependency that you don't provide - isn't it? Commented Jul 24, 2019 at 19:18

9 Answers

21

You can also get the error No module named '_cffi_backend' when the runtime version of your AWS Lambda function is different than the python version used to create your Lambda Layer.

I received this error when I set the runtime of my Lambda Function to Python 3.10, but I installed the dependencies for my Lambda Layer in an environment that was running Python 3.8.
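As a quick diagnostic, the CPython version a layer was built for is embedded in the compiled extension's filename (e.g. _cffi_backend.cpython-38-x86_64-linux-gnu.so). A small sketch of comparing that tag against the function's runtime (the helper names are hypothetical):

```python
import re
import sys

def built_for(so_filename):
    """Extract the (major, minor) CPython version from a compiled-extension
    filename, e.g. '_cffi_backend.cpython-38-x86_64-linux-gnu.so' -> (3, 8)."""
    m = re.search(r'cpython-(\d)(\d+)', so_filename)
    return (int(m.group(1)), int(m.group(2))) if m else None

def matches_runtime(so_filename, runtime=None):
    """True when the extension was built for the given runtime version
    (defaults to the interpreter running this code)."""
    runtime = runtime or sys.version_info[:2]
    return built_for(so_filename) == tuple(runtime)
```

If built_for() disagrees with the Lambda's configured runtime, you've found the mismatch described above.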


2 Comments

Confirmed, I had to use 3.8 instead of 3.9. I haven't found a way to build it with the 3.9 version yet. On 20 July 2023
If you're using instructions from OzNetNerd, you can replace the lambci image with the official ones from AWS. All you need to do is replace "lambci/lambda:build-python3.7" with "public.ecr.aws/sam/build-python3.9"
5

The simplest solution is to use the lambci/lambda container as described here. I wrote about it in some detail here.

In a nutshell, some Python packages (e.g. cffi) have OS-level dependencies. Because the OS you're packaging the code on is different to the one Lambda uses, the script fails to run.

1 Comment

As of July 2023, lambci containers only support up to Python 3.8. Making a note here that OzNetNerd's instructions are still relevant, but the official AWS builds should be used instead: gallery.ecr.aws/sam e.g. public.ecr.aws/sam/build-python3.10
4

Working on Ubuntu, I had no success getting pysftp or paramiko into Lambda. So I created an EC2 instance (and later a VirtualBox VM with Amazon Linux 2) on my desktop and built the same code there, with the same libraries. And... that works...

2 Comments

OP should use this Docker container - aws.amazon.com/premiumsupport/knowledge-center/… - see below answer (stackoverflow.com/a/64780521/6233477) for more info.
I don't think this answer realises just how correct it actually is - the version of the Lambda runtime needs to match the version of the build container too, at least as far as this particular library is concerned. (eg py39 runtime won't work when the build container is py38)
2

Ran into the same issue while creating a layer for the snowflake-python-connector for arm64 using an amazonlinux docker container.

About the snowflake-python-connector issue, in case it helps someone: packaging the layer using docker on the x86 arch gives the following error

{
  "errorMessage": "Unable to import module 'lambda_function': /opt/python/lib/python3.9/site-packages/cryptography/hazmat/bindings/_rust.abi3.so: cannot open shared object file: No such file or directory",
  "errorType": "Runtime.ImportModuleError",
  "requestId": "07fc4b23-21c2-44e8-a6cd-7b918b84b9f9",
  "stackTrace": []
}

And using arm64 throws the following error

File "/usr/lib/python2.6/site-packages/cffi/api.py", line 56, in __init__
    import _cffi_backend as backend
ImportError: No module named _cffi_backend

Creating the layer as described in the link shared by OzNetNerd under the accepted solution's comments worked for me. Posting the link here: https://repost.aws/knowledge-center/lambda-layer-simulated-docker

Comments

1

May be late, but hopefully this will help someone in the future. This answer is specific to the "_cffi_backend....." error. One other thing that may help is to know how you installed Python: if you did it using brew, you will need to create a virtual env using python. See here:

https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html

I did this in a Linux environment without downloading the wheel files that are compatible with the various Linux distros, and it worked.

Comments

1

Using the serverless CLI with the serverless-python-requirements plugin, if you change the AWS Lambda Python runtime version and publish, the plugin will reuse the same pip download cache that was populated under the old Python version. (This cache is outside your git repo, so git clean doesn't help either.)

That results in having _cffi_backend.cpython-310-x86_64-linux-gnu.so in your lambda, even though you expect cpython-311.

The fix is to run serverless requirements cleanCache to delete the pip download cache.
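Before publishing, a stale cache like this can be caught by scanning the packaged directory for extensions whose embedded CPython tag doesn't match the intended runtime; a minimal sketch (the function name and the cp311 default are illustrative):

```python
import os
import re

def find_stale_extensions(build_dir, runtime_tag='cp311'):
    """Return paths of compiled extensions whose filename tag shows they were
    built for a CPython version other than runtime_tag (e.g. 'cp311')."""
    stale = []
    for root, _dirs, files in os.walk(build_dir):
        for name in files:
            m = re.search(r'\.cpython-(\d+)', name)
            if m and 'cp' + m.group(1) != runtime_tag:
                stale.append(os.path.join(root, name))
    return stale
```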

Comments

0

This looks like a common problem with many binary packages. I don't have a generic solution yet, but what you really have to do is move (or symlink) the binary file from where it is actually installed to the root of your package.

I usually solve this with post-install scripts for my Lambdas. The ones using paramiko run something like:

ln -s venv/MY_LAMBDA/lib/python3.6/site-packages/_cffi_backend.cpython-36m-x86_64-linux-gnu.so MY_LAMBDA/
ln -s venv/MY_LAMBDA/lib/python3.6/site-packages/.libs_cffi_backend/libffi-XXXXXXXX.so.6.0.4 MY_LAMBDA/
ln -s venv/MY_LAMBDA/lib/python3.6/site-packages/nacl MY_LAMBDA/

You will have to identify the exact name of the libffi... file from your virtualenv. It might vary from the machine you compile the library on.
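The same post-install step can be sketched in Python, globbing for the artifacts since the exact libffi filename varies per build (the paths and helper name here are illustrative):

```python
import glob
import os

def link_cffi_artifacts(site_packages, package_root):
    """Symlink cffi's compiled artifacts from a virtualenv's site-packages
    into the package root, as described above. Returns the created links."""
    linked = []
    for pattern in ('_cffi_backend.*.so', '.libs_cffi_backend', 'nacl'):
        for src in glob.glob(os.path.join(site_packages, pattern)):
            dst = os.path.join(package_root, os.path.basename(src))
            if not os.path.lexists(dst):
                os.symlink(src, dst)
            linked.append(dst)
    return linked
```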

Comments

0

If you want to use cffi in Lambda, you can create a Lambda layer by following these steps:

install cffi in the Python Lambda runtime container

copy the cffi binaries into the layer

attach the layer to the Lambda

Based on the Python runtime version, you have to use yum or dnf to install the ffi packages.

Comments

0

You need to create your Lambda with distribution packages that are compatible with the environment the Lambda will run under. If your dev machine is running Windows but the Lambda is running Linux, you'll end up creating a zip file that is compatible with your dev machine and not the Lambda environment.

When building a zip file for a Lambda you will need to know: the platform (OS and architecture), Python Version, and Python implementation. For most use cases the appropriate values are manylinux2014_x86_64, 3.13 (as of 2025), and cp (CPython) respectively.

Example of how to build a Lambda zip file for a package called mypkg.

mkdir build
cp -r mypkg build
python -m pip install               \
    --target build                  \
    --platform manylinux2014_x86_64 \
    --python-version 3.13           \
    --implementation cp             \
    --only-binary=:all:             \
    --no-compile                    \
    --requirement requirements.txt
cd build
zip -r ../my_lambda.zip .

For the most up-to-date recommendations for the value of platform, check here. Available Python versions can be found here. If the above fails to build for a given package, you might need to fiddle with the platform tag to see what builds and what works in Lambda. A good example is psycopg[binary], for which you would need a platform of manylinux_2_17_x86_64 for version 3.x. That extra 2_17 refers to the minimum version (2.17) of glibc required.

You can find out what version of glibc Lambda will be running by running ldd --version from the public.ecr.aws/lambda/python:3.13 docker image (you cannot use a platform tag with a higher value than this). You can find out more about the platform tag here.
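Wheel filenames encode exactly these three tags (python tag, ABI tag, platform tag), so you can read compatibility straight off the filename; a small illustrative parser:

```python
def parse_wheel_tags(wheel_name):
    """Split a wheel filename into (python_tag, abi_tag, platform_tag).
    Per PEP 427, a wheel filename ends with:
    -{python tag}-{abi tag}-{platform tag}.whl"""
    stem = wheel_name[:-len('.whl')] if wheel_name.endswith('.whl') else wheel_name
    parts = stem.split('-')
    return parts[-3], parts[-2], parts[-1]
```

A wheel is usable in the Lambda above only if its python tag matches the runtime (e.g. cp313) and its platform tag is no newer than what the runtime's glibc supports.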

Of the four extra options included:

  • --target is self explanatory (it's where the packages will be installed)
  • --only-binary is required when specifying --platform, --python-version or --implementation. It avoids downloading source packages, and only downloads wheels. This is because pip cannot build packages for an environment that is different to the current host.
  • --no-compile avoids writing Python bytecode files (this is recommended by AWS)
  • --requirement (-r) is well known, and installs the list of requirements from the given file.

Building Packages from Source / Docker

If you need to build a package from source, and the OS of your local dev machine differs from the OS the lambda will run under, then you will need the help of Docker (for different architectures you'll also need QEMU).

You will want to use the public.ecr.aws/lambda/python:3.13 docker image to build your zip file. This has the advantage of building the zip file in the environment that it will run under. This means that pip will automatically install the right distribution package for each requirement, and you'll be able to install source packages if required. Example Dockerfile (targeting Python 3.13):

FROM public.ecr.aws/lambda/python:3.13

# note: on Amazon Linux the C++ compiler package is gcc-c++, and zip is needed for the final step
RUN dnf install -y python3.13 python3.13-devel python3.13-pip gcc gcc-c++ zip

WORKDIR /work
COPY requirements.txt .
RUN python3.13 -m pip install --target . -r requirements.txt
COPY mypkg mypkg
# we compile everything in this image, as we are confident that we are compiling for
# the right environment
RUN python3.13 -m compileall mypkg
RUN zip -r my-lambda.zip .

ENTRYPOINT ["/bin/bash", "-c", "exec \"$@\"", "entrypoint"]

And then a programmatic way to extract the file from the image.

docker run --rm -v "$PWD:/host" <image-tag> cp /work/my-lambda.zip /host

Comments
