Squeezing fat Machine Learning Applications into tiny AWS Lambda Function Space

2020-05-05

AWS Lambda Limitations

As this post is being banged out, Amazon limits its AWS Lambda serverless functions to 250MB of code (uncompressed; 262,144,000 bytes to be exact) and 512MB of data (in the /tmp directory).

Update: There is now (June 2020) a way to attach AWS EFS to a Lambda function to overcome some of these issues, albeit at additional cost: https://aws.amazon.com/blogs/aws/new-a-shared-file-system-for-your-lambda-functions/. We'll cover this in a future post.

Machine learning libraries are notoriously bloated, and usually we don't care much. How many times have we sucked in 27MB of sklearn just so we can use train_test_split()?

The AWS limits are painful. We've hit the 'More than 262144000 bytes' error many times. But as of a few weeks ago, these limits have disappeared from the AWS Lambda limits page: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html.

Does that mean these limits are going to be relaxed at some point in the near future?

In either case, there is a small workaround: dynamically loading libraries at runtime to get around this roadblock. This is also useful for getting around the AWS five-layer maximum, and for any large application that requires more than one combination of library behemoths.

We found the original idea for this trick on StackOverflow. There are a couple of other posts on this technique at https://keestalkstech.com/2019/04/aws-lambda-size-pil-tf-keras-numpy/ and https://medium.com/@mike.p.moritz/running-tensorflow-on-aws-lambda-using-serverless-5acf20e00033.

Our approach is similar, but different enough that it might be useful to other people who face this 'glass slipper is too small' problem with their princess of an application.

Basically, the idea is to first upload the zipped-up libraries to AWS S3. Then insert the following code into your serverless function (here we use Python) to download and unzip each zipped library into /tmp. Finally, add the path to these unzipped libraries, in this example '/tmp/imports', to sys.path, Python's rough equivalent of LD_LIBRARY_PATH. [The extra '/python' is appended to the path because AWS Lambda wants Python packages served up from this subdirectory.]



import sys
import zipfile
from io import BytesIO

import boto3

BUCKET = 'MY-DEPS-BUCKET'  # placeholder: the S3 bucket holding your zipped libraries
deps = []                  # placeholder: your zipped-code bucket keys

s3 = boto3.resource('s3')
for dep in deps:
    # Stream each zip from S3 into memory and extract it straight into /tmp,
    # so the zipped and unzipped copies never occupy /tmp at the same time.
    zip_obj = s3.Object(bucket_name=BUCKET, key=dep)
    buffer = BytesIO(zip_obj.get()["Body"].read())
    zipfile.ZipFile(buffer).extractall('/tmp/imports')
sys.path.append('/tmp/imports/python')

Unlike the other examples we've seen, this streaming download approach does not first save each library zip to a file before unzipping it. We tried that approach too, but some libraries were still too big [looking at YOU, TensorFlow/Keras. And olde TensorFlow 1.15 at that, 2.x being many times too big for Lambda] to fit both the zipped and unzipped copies into 512MB.
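
For completeness, here is a minimal sketch of how the zips themselves might be built and uploaded. The library, bucket, and key names are placeholders we made up for illustration; the one real requirement is that each archive contain a top-level python/ directory so it extracts to '/tmp/imports/python'.

import shutil
import subprocess

import boto3

# Placeholders for illustration: substitute your own library, bucket, and key.
LIB, BUCKET, KEY = 'scikit-learn', 'MY-DEPS-BUCKET', 'sklearn.zip'

# Install the library into build/python so the zip holds a top-level
# python/ directory, matching the '/tmp/imports/python' layout above.
subprocess.run(['pip', 'install', LIB, '-t', 'build/python'], check=True)

# Zip up the contents of build/ and push the archive to S3.
archive = shutil.make_archive('sklearn', 'zip', root_dir='build')
boto3.client('s3').upload_file(archive, BUCKET, KEY)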

If AWS does relax these constraints at some point [as we read Azure has done, but have not verified], we will still need something like this dynamic library loader for our application, Automatic.ai, which collects the dependencies of the components each intelligent user-built service requires and automatically loads them when the service launches. We can't just pig out and load all possible combinations every time. But YMMV with your app.
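
As a rough illustration of that idea (the load_deps name, the caching set, and the per-service key lists are our inventions for this sketch, not Automatic.ai's actual code), the snippet above can be wrapped in a memoized loader so each service pulls only its own dependencies, and warm containers skip the download entirely:

import sys
import zipfile
from io import BytesIO

import boto3

s3 = boto3.resource('s3')
_loaded = set()  # persists across invocations in a warm container

def load_deps(bucket, keys):
    # Extract each zipped dependency into /tmp/imports at most once per
    # container, then make the result importable.
    for key in keys:
        if key in _loaded:
            continue
        buf = BytesIO(s3.Object(bucket_name=bucket, key=key).get()["Body"].read())
        zipfile.ZipFile(buf).extractall('/tmp/imports')
        _loaded.add(key)
    if '/tmp/imports/python' not in sys.path:
        sys.path.append('/tmp/imports/python')

A service handler then calls load_deps() with just its own key list before importing the heavy modules.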

Enjoy!