Docker for Developers: Echo Servers

28.05.2021 - ?


Video section

Part 1:

Alternative video links:

Part 2:

Alternative video links:

Echo Servers

In the last part, Docker fundamentals were demonstrated (and, I hope, explained). It was mainly about managing Docker images and containers, since such knowledge is essential (and crucial) for working with Docker. Now, it’s time to move on to more advanced stuff: how to use Docker in the software development process. The process will be very simple, since the main focus should be on Docker, not on programming.

The goal is to show the Docker workflow during development of a simple HTTP “echo” server. Of course, the server is just an example - it could’ve been any other app (yet, considering my interests, it would be some server application). And the server will be very simple - which is good, since the main focus should be on the Docker workflow, and not on solving programming problems. The goal will be achieved by starting and testing the mentioned server.

In the next section, the problem specification is presented. It should make clear how the “echo” server is supposed to work, and how to check whether the server conforms to its specification.

Then, a procedure to solve the presented problem will be introduced. In real life, when a server application is needed (and its specification is known), the whole technology stack (platform, programming language, libraries, protocols, etc.) is chosen, and then the application is simply implemented. In other words, regardless of the problem to solve, it is solved with one, chosen stack.

This is not the case here, though. The goal is not to solve the problem (provide the “echo” HTTP server - probably no one needs such a server!), but to show some practical Docker workflows. For this reason, several different tools will be used. It should be good to demonstrate usage of various tools in Docker containers. On the other hand, various workflows will also be demonstrated, and both their similarities and differences discussed. Various tools may require various workflows, but Docker containers can also be used in different ways. Anyone who uses Docker should be aware of this. Regardless of any differences, though, the same procedure will be applied in all cases (programming languages).

And then, the problem will be solved with two different languages/platforms (and toolsets): Python and Node.js. (BTW, the PHP solution will also be demonstrated, yet not in this material.)

Problem specification

What is this “echo” server, anyway?

In our case, the server should be running on port 3000, and conform to the following simple specification:

  • the GET / request should be served with the "ok" response,
  • the GET /echo/something request should be served with the "something" response.

The first request, GET /, can be regarded as a kind of ping, or heartbeat - just a check whether the server works. The second request is the most important, and defines the way the whole application should work. So, it can be regarded as a kind of “Hello, world!” application for the world of servers.

It would be possible to proceed with such a specification of the problem, yet it won’t be done, for one reason: you’re here to learn solving problems in a holistic way. There’s one more question which should be asked: is the presented server specification complete?

(In the video, viewers were given some time to answer the question. Besides, the video can always be paused…)

The answer can be both yes and no - it depends on who is asked.

If the question is asked of a client, especially a nontechnical one, someone who doesn’t know anything about software, programming, and communication protocols, the answer will probably be yes - the specification is complete. There’s nothing more that the server should do.

And that’s correct - as long as the person is not an IT specialist. The problem is, software engineers are specialists, and for this reason they should be more precise when it comes to such details. Let’s think about the server: how should it behave when different requests come? It is possible to send any valid HTTP request - for that matter, what would happen if the server received POST /? Or GET /some/other/url?

Of course, all those details can be ignored here. The goal is to learn something about the Docker workflow, not to actually implement the echo server. However, in real life, all such questions should be answered, if not by a nontechnical client, then by some engineer responsible for the implementation. A client can be ignorant in this (technical) field. An engineer cannot.

If this were a real-life project, such questions would be answered by the team leader. (Or, the team leader would assign someone to specify all the missing details.) It can be done on the basis of the client’s needs, as well as of the environment where the server is supposed to be working (deployed). The environment consists of other services, which may use some specific protocols; there may be some conventions which should be followed; and so on.

For an engineer, the specification is quite vague, and more details should be specified. That’s the engineer’s job, if the client can’t do it.

Testing echo server

And so we come to the next topic. Two topics, actually, and both are very, very broad. The first topic is testing (especially automated testing). It’s very important, and I’m still planning to make separate material about it, so I won’t focus (too much) on testing here.

There’s a different topic, though: goal(s). Some goals have been specified for our (quite simple) problem, but are those goals good? And what does it mean that a goal is good? What makes a goal good or bad? Well, that’s a very coaching-like issue, yet I believe some facts should be noted here.

Goals may be very different, more and less challenging, yet it should be stated very clearly: there are good goals, and there are bad goals. Of course, only good goals should be considered, and all bad goals should be avoided - that’s not rocket science, is it? The point is, it’s not about morality, nor ethics - it’s all about something else.

A good goal is a goal which can be achieved. In other words, it should be possible to achieve such a goal with all the available means, within the assumed time. As simple as that, yet some clarification may be useful, so here it is: I could say that my goal is to fly to the moon tomorrow. Well, this goal is very easy to verify: if I’m not on the moon tomorrow, I have failed. Taking into consideration the fact that I can’t fly spaceships, and that I have no spaceship, I won’t be able to achieve this goal. So, it’s a bad goal (at least for me). I shouldn’t set myself such goals (unless I want to become a very frustrated person).

In general, we should set ourselves only goals which can be achieved - good goals.

That’s not all, though. A good goal should be not only achievable, but also verifiable. Flying to the moon may not be achievable, but it is very easy to verify. On the other hand, making someone happy may be achievable (I’m quite happy when I’m served a tasty meal around 13:00), but may also be almost impossible to verify.

In our case, we’ll assume that the goal for the echo server is to provide the correct responses for both types of specified requests. Such a goal seems very easy to verify: any request can be sent to the server, and any response can be examined. When all the responses are correct, it means the server works as intended, so the goal has been achieved. Sounds quite simple.

And here a new question emerges: how can those conditions actually be verified? Well, it is possible with a simple tool that should already be known from previous episodes: curl. Using this console-based HTTP client, any request can be sent to the server, and the returned responses can be examined:

$ curl http://localhost:3000/
$ curl http://localhost:3000/echo/yeye

Just one simple command, and everything is clear. Well, two commands, actually.

There’s only one problem with such an approach: both commands have to be typed manually each time the server should be tested. Believe it or not, such tests are performed very often, and manual testing soon becomes a dull task. It is also prone to errors…

Well, the truth is, when I was preparing materials for this text, I was performing all those tests by hand, and it became very dull. So I wrote a simple test script to automate the task. Now, I’d like to show you the script, to show you how smart I was to write it. Yet, let’s stick to the official version, which is…

So, let’s do it as professionals do: with an automated script. As it happens, such a script is available, and can be used to verify whether the goal has been achieved.

The script is very simple, written in Bash, and will be run directly in the OS:


#!/bin/bash

URL="http://localhost:3000"

checkResponse () {
  local URL="$1"
  local EXPECTED="$2"
  local RESP=`curl "$URL"`
  if [ "$RESP" != "$EXPECTED" ]; then
    echo "Expected: $EXPECTED"
    echo "Received: $RESP"
    exit 1
  fi
}

checkResponse "$URL/" "ok"
checkResponse "$URL/echo/haha" "haha"

echo "All tests ok."

Why is the script written in Bash?

Well, there are many reasons for this:

  • First of all, Bash is available on most *nix systems, so it can be used there. It is also more powerful than primitive shells like /bin/sh. Using Bash means the script should work, without any problems, on virtually any system where Bash is installed.

In a real-life project, some testing framework (or library) would be used. Not in this case, though:

  • All that needs to be done are two very simple tests: two HTTP requests, and two responses to check. For such a simple task, a framework is not needed.
    Moreover, using a framework would make this task more complex - and that should be avoided.

  • As it was stated, the echo server will be implemented in various languages. So, if some framework were to be used, it would have to be used in all those cases. In other words, the technology stack would become more complex, and probably more Docker containers would be needed during development.
    Of course, such situations happen in real-life projects. However, the focus of this tutorial should be on simple Docker workflow, not on such complex environments.

Getting back to the script: it actually starts with the server’s URL. It should be a constant, yet Bash is rather a primitive tool for programming (it’s not supposed to be used for programming!), so a variable is used. Then, there are two checks of server responses to simple requests - all with respect to the specification. When both tests pass, a simple message is printed. That’s all.

In fact, the whole script is composed of just two calls to the checkResponse function. And all this function does is check whether the server response is the same as the expected one.

The checkResponse function starts with declarations of local variables, in which the passed parameters are saved. The third variable, RESP, is set to the server’s response for the passed URL. When the response is not equal to the expected one, the details are printed, and the script finishes (the exit 1 line) with an error status.

When the first test (the first checkResponse call) is not successful, the whole script finishes with an error status. When it passes, the second test is performed in the same way.

All this testing is very simple, and it should work.
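For comparison, the same two checks can also be sketched in Python. To keep the example self-contained (no Docker, no real echo server yet), it spins up a throwaway in-process stub of the echo server - the stub and all names here are purely an assumption for illustration, not part of the project:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubEchoHandler(BaseHTTPRequestHandler):
    """Throwaway stand-in for the echo server (assumption: exists only
    so this example is runnable on its own)."""
    def do_GET(self):
        if self.path == "/":
            body = b"ok"
        elif self.path.startswith("/echo/"):
            body = self.path[len("/echo/"):].encode()
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def check_response(base_url, path, expected):
    # same idea as the Bash checkResponse: fetch, compare verbatim
    with urllib.request.urlopen(base_url + path) as resp:
        actual = resp.read().decode()
    if actual != expected:
        raise SystemExit(f"Expected: {expected}\nReceived: {actual}")

# port 0 = let the OS pick a free port; the thread dies with the process
server = HTTPServer(("127.0.0.1", 0), StubEchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base_url = f"http://127.0.0.1:{server.server_port}"

check_response(base_url, "/", "ok")
check_response(base_url, "/echo/haha", "haha")
print("All tests ok.")
```

The logic is identical to the Bash script: send the two specified requests and compare the responses verbatim, failing loudly on the first mismatch.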

Base procedure

The problem specification is known. The goal is known. The way of testing/verification is also known. It’s time to think about how the problem can actually be solved.

For this purpose, the following procedure will be applied:

  1. At the beginning, a Docker image will be pulled (downloaded).
    The image should contain all the tools needed to start the work. Please note that start: it’s not complete, only a start. It means that an image with just an interpreter and some package manager should be just fine.
  2. Then, a Docker container, based on the pulled image, will be started…
  3. …in order to check/verify, whether the Docker image is suitable for the job.
    In other words: can a new project be started with the container? Can the development be done from the OS (and not only within the container)? Can the developed project be run inside the container, when needed? All such questions should be answered.
  4. When everything works, as intended, the work on the actual echo server can be started.
    Of course, during the development process, the test script will be used.

Python solution

It’s time to get to the real stuff. We’ll start with the Python solution.

Here’s the list of what will be needed:

  • A Docker image for Python 3.9, based on Alpine Linux
    Why Python 3.9? Because that was the newest version available on Docker Hub when the video and audio clips were recorded. Besides, such a Python version may not be available in mainstream GNU/Linux distributions (I mean those serious and stable ones), so a Docker image is needed in order to use it. (Alternatively, it would be possible to compile this Python from source. It would also probably be much more complex.)
  • pip and venv will be used to manage the Python project. They are both standard tools for Python.
  • The microframework FastAPI, and the uvicorn application server will be used to implement the echo server.
    Why FastAPI? Because it’s small, minimal, and it’s enough to solve the problem. Besides, it’s quite nice to work with.
  • Last, but not least, all the testing will be performed with the script, using curl.


Sticking to the procedure, the following steps should be performed in order to get ready to work: (1) get the Docker image, then (2) start the container, and then (3) try to initialize a brand-new Python project. Let’s get to work:

$ mkdir tmp-to-delete
$ cd tmp-to-delete
$ docker pull python:3.9-alpine
$ docker images | grep python
$ docker run -it --rm -v "$(pwd)":/usr/src/app -w /usr/src/app python:3.9-alpine /bin/sh

How do we know the directory is /usr/src/app? Because we are professionals, and we checked the Docker image docs, where this information is given.

Now, having a running container with Python 3, let’s create a new project inside it, by initializing a new virtual environment and activating it:

/usr/src/app $ python3 -m venv .
/usr/src/app $ ls
/usr/src/app $ . bin/activate

So far, so good. Let’s find out whether we’ll be able to actually install some external library, and to use it. For that purpose, I chose the requests library. We are going to install it, and then use it to make an HTTP request.

(app) /usr/src/app $ pip install requests
(app) /usr/src/app $ python3
>>> import requests
>>> r = requests.get('')
>>> r.status_code
>>> r.text

It seems to work fine inside the container, but can we do it with a script that we write in our host filesystem? Let’s go to the tmp-to-delete directory in our host filesystem, and create a script there with some sane text editor. In the script, all the commands executed inside the Python 3 interpreter should be placed.

import requests
r = requests.get('')

Inside the container, the script should be available. We can inspect its contents, and then try to run it (it should work fine, and provide the same results):

(app) /usr/src/app $ cat
(app) /usr/src/app $ python3

Now, the last thing to check is running the test script in our host system. Of course, since we have no working echo server yet, the script should give us the feedback that nothing works as expected.

$ ./

After all those initial tests have been performed, and all the results are satisfying, the container can be closed, and the temporary directory can be safely deleted (it’s not needed anymore). It’s time to get to the real work.

The first step: GET /

And what is the real work? Well, we’ll start with the following Python script:

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
  return {"message": "Hello World"}

The code was taken from the FastAPI tutorial, without any modifications (I was too lazy to do it!). Of course, it doesn’t do what is needed, yet it can be a good starting point: let’s have some working server, and then try to modify it, so it will conform to our desired specification.

What does the code do, by the way? It’s quite simple. At the beginning, we import the FastAPI class from the fastapi module. BTW, the module is not available in the standard Python 3 library, so it will have to be installed, just like the requests library was installed before.

Then, the app object is created, as an application to implement our echo server.

Only one route is defined - the root URL. It’s implemented as a simple function, which returns the {"message":"Hello World"} response. It could be named the server-side equivalent of the “Hello, world” script. By the way, the route is defined for the HTTP GET method - that’s the get part of the @app.get decorator.
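The route registration itself is just a Python decorator at work. A toy registry (hypothetical names; this is not FastAPI’s real internals, only the general idea) shows what such a decorator does conceptually:

```python
# Toy stand-in for a framework's route table (illustration only).
routes = {}

def get(path):
    # Conceptual equivalent of FastAPI's @app.get: remember that
    # `func` serves GET requests for `path`, then return it unchanged.
    def register(func):
        routes[("GET", path)] = func
        return func
    return register

@get("/")
def root():
    return {"message": "Hello World"}

# "Dispatching" a GET / request is then just a dictionary lookup:
print(routes[("GET", "/")]())
```

So the decorator doesn’t change the function at all; it only records it in a lookup table the framework consults for each incoming request.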

What should be done with the script? It should be placed inside our project’s directory, and that’s a problem, since we have no such thing so far. We have to create it, then. We know that the image works fine, so we can start a development container, and create a new Python 3 development environment inside it. Then, both FastAPI and uvicorn should be installed:

$ docker run -it --rm -p 3000:3000 -v "$(pwd)":/usr/src/app -w /usr/src/app python:3.9-alpine /bin/sh
/usr/src/app $ python3 -m venv .
/usr/src/app $ . bin/activate
(app) /usr/src/app $ pip install fastapi uvicorn

Now, it’s time to create a separate directory for our project:

(app) /usr/src/app $ mkdir project
(app) /usr/src/app $ cd project

We need the script inside the directory. It can be achieved in many different ways: the script can be made elsewhere, and then simply copied; or a new file can be created with a text editor. The point is, it should be done with our host system, and not inside the container (which should be used only for one purpose: to execute the script).

Having the script in place, let’s start the application server:

(app) /usr/src/app/project $ uvicorn echo:app --reload

…and there’s a problem: the server is listening on localhost port 8000, while we need it to be listening on all network interfaces, not only on localhost, and the port should be 3000.

We have to stop the running server (Ctrl+C), and then restart it using different options:

(app) /usr/src/app/project $ uvicorn --host 0.0.0.0 --port 3000 echo:app --reload

Now, both the network interface and the port seem to be fine. Let’s check whether the server actually works, and is accessible from our host system, using curl:

$ curl localhost:3000

It seems to work fine. So, let’s run our test script - just for fun. Of course, the script should detect that the running server doesn’t conform to the echo server specification.

$ ./

Let’s sum up: we have a working development environment, and we have a skeleton of our echo server. Now, we need to implement both request URLs, and we should be fine. Let’s proceed to the first request, GET /. It should be enough to just change the response to "ok":

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
  return "ok"
  #return {"message": "Hello World"}

Quite simple, isn’t it? Moreover, if you do it in the script, the uvicorn application server should be smart enough to detect that the script has changed, reload it, and restart the application. So, all we have to do is run the test script one more time…

…and, as we all can see, it doesn’t work as expected. It was supposed to be completely different: a working server, flower carpets, girls cheering us on, interviews on TV, we were supposed to be the code heroes… And what we have instead is the usual software crap.

More seriously, though, the test script result should be investigated in detail. The problem is quite simple to spot: the expected response is ok, while the actual response was "ok". Such a difference may seem unimportant to a human being, yet it is very important to a computer system. And it should be.

Where does the difference come from?

From my experience, I can say one thing: the server returns the "ok" string, quotes included. It seems that the response is sent in the JSON format, instead of plain text. Why? It turns out that FastAPI, by itself, converts our plain-text string to the JSON format.

And, when you dive into the FastAPI documentation (just remember: professionals should read documentation; the good news is that even they can swear at it sometimes, a little), you’ll find out that when the value returned from a response function can be converted to the JSON format, the conversion will be done.

The point is, a string is a simple data type, and can be converted to JSON.

Many web APIs return data in the JSON format, so such functionality is quite handy and convenient in about 99% of cases.
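The stray quotes are nothing FastAPI-specific; they are simply what JSON encoding of a bare string looks like, as Python’s standard library can illustrate:

```python
import json

# A bare string is itself a valid JSON document - encoding it adds the
# quotes that broke the comparison with the plain-text "ok" we expected:
print(json.dumps("ok"))                        # "ok" (with quotes)
print(json.dumps({"message": "Hello World"}))  # the usual JSON object
```

So the test script was comparing the two characters ok against the four characters "ok", and correctly reported a mismatch.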

Our case, though, is in the remaining 1%. We want to return the response in plain text, just as it is, without any conversion to JSON (or any other format). We have to modify the script:

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/")
async def root():
  return Response(content = "ok")
  #return "ok"
  #return {"message": "Hello World"}

It’s much better now. Well, the test script still shows an error, yet it’s a different one. The detected problem is now with the GET /echo/haha URL, not with the GET / one. It means that our fix was successful!

The second step: GET /echo/{message}

Now, it should be enough to implement the GET /echo/{message} URL.

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/")
async def root():
  return Response(content = "ok")
  #return "ok"
  #return {"message": "Hello World"}

@app.get("/echo/{msg}")
async def echo(msg: str):
  return Response(content = msg)

The echo function is very similar to the root one. In the URL part of the route decorator, the {msg} placeholder is used, and its value is passed to the function as the msg parameter (with the str type annotation). In the response, the value of this parameter is simply returned (in plain text). And that’s what the whole project is about.
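The {msg} part of the path is a template that the framework matches against incoming URLs, capturing the variable segment. A crude plain-Python version of such matching (hypothetical helper names; FastAPI’s real routing is more involved) looks like this:

```python
import re
from typing import Optional

# Crude stand-in for matching the "/echo/{msg}" path template
# (illustration only - not FastAPI's actual routing code):
ECHO_PATTERN = re.compile(r"^/echo/(?P<msg>[^/]+)$")

def extract_message(path: str) -> Optional[str]:
    # Return the captured {msg} segment, or None when the path
    # doesn't match the template at all.
    match = ECHO_PATTERN.match(path)
    return match.group("msg") if match else None

print(extract_message("/echo/haha"))  # the captured message
print(extract_message("/"))           # no match, so None
```

The captured value is then handed to the handler function as its parameter, which is exactly what happens with msg in the echo function above.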


The echo server works as expected. That’s not a big deal, though. What is most important here are some remarks regarding working with Docker.

First of all, a Docker container was started. It provided a shell, where all the commands could be typed.

Then, we worked both inside the container and outside of it. Inside the container, the development environment was created, and the development server was started. Then, the server simply worked: it was detecting changes, and reloading the script when it was needed.

And that’s one way to work with Docker containers in development: just start a container, and use it until it’s no longer needed (and then it can simply be stopped).

Only one terminal session was used during development. If another session was needed, it could be created with the docker exec command.

After the work is done (either with the project, or for the day), the container is stopped. All the files remain safe in the host filesystem.

And one more remark. I haven’t demonstrated it, yet sometimes it is needed to do something with the source files (e.g. using git). In such cases, it should be done outside the container. The container should be used only for providing tools to compile, build, and sometimes run the software. Any other tools should be kept outside of the Docker image.

As for the echo server, it works as intended: we have a server, and we have a Docker container to run it in… The run command also works: it was used in two ways, once without the published port, and once with it. The test script also works, although there is some garbage in its output. I don’t care about it. If you do, go ahead - modify it (it’s not that hard).

And one more bonus for you: start the application server, and check the following URLs in some browser:

  • localhost:3000/docs
  • localhost:3000/redoc

Those URLs are not defined in the script.

To be precise, you should:

  1. Start a Docker container.
  2. Activate the Python3 virtual development environment.
  3. Start the echo server.
  4. Check the URLs in a browser.

It’s a small bonus for using FastAPI.

TBC in the third part…