  • #12058
Closed
Open
Issue created Oct 30, 2020 by gatopeich@gatopeich

Performance degradation in Python 3.8 compared to the docker-library build (python3.8:alpine3.12)

After migrating from the docker-library image to the new python3 package bundled in Alpine 3.12, an automated performance test shows a clear degradation.

The test consists of serializing complex JSON messages with orjson and sending them over plain TCP. The receiver side runs outside the container and is always the same.
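For reference, a minimal sketch of what the sender side of such a test might look like; the host, port, and message shape here are placeholders, not the actual workload from this report:

```python
# Minimal sketch of the sender-side benchmark described above.
# HOST, PORT and MESSAGE are placeholders, not the real test data.
import socket
import time

import orjson

HOST, PORT = "192.0.2.10", 9000   # receiver outside the container (placeholder)
MESSAGE = {"id": 1, "payload": [{"k": i, "v": i * 1.5} for i in range(100)]}

def run(seconds=10):
    sock = socket.create_connection((HOST, PORT))
    sent = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        data = orjson.dumps(MESSAGE)   # JSON serialization hot spot
        sock.sendall(data + b"\n")     # plain (non-asyncio) TCP send hot spot
        sent += 1
    elapsed = time.perf_counter() - start
    print(f"{sent / elapsed:,.0f} messages/second")

if __name__ == "__main__":
    run()
```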

Performance degrades from ~200,000 messages per second with python3.7:alpine3.10 (or python3.8:alpine3.12) to ~180,000 with alpine:3.12 + apk add python3.

Profiling with py-spy, all flame charts look very similar. The hot spots in the code are:

  1. ~37% on TCP socket.sendall() (non-asyncio) => ~38% on the faster version
  2. ~30% on JSON serialization (with orjson) => ~32% on the faster version

In both cases I used the same orjson 3.3.1 binary from the manylinux wheel, and I also tried building different versions from scratch with the same Rust version and commands.

Since the profile shapes are so similar in both cases, I am wondering if there is a general performance drop in the interpreter itself...
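One way to test that hypothesis would be to run an identical pure-Python micro-benchmark (no orjson, no sockets) in both images and compare the timings; a minimal sketch, with an arbitrary loop body:

```python
# Quick check for a general interpreter-level slowdown, independent of
# orjson and the socket layer: run the same pure-Python loop in both
# images and compare. The loop body is arbitrary.
import sys
import timeit

def pure_python_work():
    total = 0
    for i in range(10_000):
        total += i * i
    return total

if __name__ == "__main__":
    best = min(timeit.repeat(pure_python_work, number=1_000, repeat=5))
    print(sys.version)
    print(f"best of 5: {best:.3f} s per 1,000 calls")
```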

Edited Nov 06, 2020 by gatopeich