Showing posts with label Docker.

Monday, August 21, 2023

An Example of Creating Network Connections in Docker with the Koko Tool

In a Docker environment, we often need to create network connections between containers and configure their network parameters. Koko is a tool that helps us set up such network links for Docker containers. This article shows how to use Koko to create network connections for Docker containers, with example commands.
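Koko itself is a small Go tool; one rough way to obtain it is to build it from its GitHub repository. This is only a sketch assuming a recent Go toolchain is installed, and the exact build steps may differ from the project's current instructions:

# Fetch and build koko from source; the resulting ./koko binary is what the commands below use
git clone https://github.com/redhat-nfvpe/koko
cd koko
go build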

1. Create containers with an isolated network setup

First, we can start two Ubuntu containers with no network attached, using the following commands:

sudo docker run --network none -dt --name ubuntu1 ubuntu:bionic /bin/bash
sudo docker run --network none -dt --name ubuntu2 ubuntu:bionic /bin/bash

This gives each container an isolated network environment with no connectivity yet; the links that will let them communicate are created in the following steps.
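To confirm the isolation before wiring anything up, you can list the network interfaces inside the containers; at this point each one should only have a loopback device. A quick check, using the container names created above:

# Only "lo" should be listed for both containers
sudo docker exec ubuntu1 ls /sys/class/net
sudo docker exec ubuntu2 ls /sys/class/net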

2. Create veth pairs for the containers

Next, we can use the following commands to create veth pairs between the containers and the host:

sudo ./koko -c net1 -d k8s_helloworld-python_helloworld-python_default_409cdca7-6c2a-45bf-a885-a5e6794f34c1_22,net1,10.200.0.2/24
sudo ./koko -c net2 -d k8s_helloworld-python-access-pod_helloworld-python-access-pod_default_36bf5089-8791-44dc-aa0c-40d1bd09a8c4_22,net1,10.200.0.3/24

The commands above create two veth pairs between the containers and the host, with the host-side ends named net1 and net2, and assign IP addresses to the container-side ends.

3. Create a bridge network

We can create a bridge and attach the host-side veth interfaces to it with the following steps:

sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip link set net1 master br0
sudo ip link set net2 master br0

This creates a bridge interface named br0 and attaches net1 and net2 to it, so traffic between the two containers can flow through the bridge.
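To double-check that both veth ends are attached to the bridge, something like the following should work (the exact output format depends on your iproute2 version):

# net1 and net2 should both show up as ports of br0
ip link show master br0
bridge link show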

4. Show the IP addresses in the container namespaces

To show the IP addresses inside the containers, we can use the following commands:

sudo docker exec -it k8s_micro-apm_micro-apm-qj76q_kube-system_5bd48be1-5530-4811-a546-1d4a688343a2_3 ip addr | grep -P "^\d|inet "
sudo docker exec -it k8s_helloworld-python_helloworld-python_default_409cdca7-6c2a-45bf-a885-a5e6794f34c1_22 ip addr | grep -P "^\d|inet "
sudo docker exec -it k8s_helloworld-python-access-pod_helloworld-python-access-pod_default_36bf5089-8791-44dc-aa0c-40d1bd09a8c4_22 ip addr | grep -P "^\d|inet "
sudo docker exec -it k8s_micro-apm_micro-apm-qj76q_kube-system_5bd48be1-5530-4811-a546-1d4a688343a2_3 ping -c5 10.200.0.2

These commands show the IP addresses inside the containers, and the last command runs a ping test between the containers to verify connectivity.

5. Clean up the network connections

Finally, we can clean up the network connections with the following commands:

sudo ./koko -D k8s_micro-apm_micro-apm-qj76q_kube-system_5bd48be1-5530-4811-a546-1d4a688343a2_3,net1
sudo ./koko -D k8s_helloworld-python_helloworld-python_default_409cdca7-6c2a-45bf-a885-a5e6794f34c1_22,net1
sudo ./koko -D k8s_helloworld-python-access-pod_helloworld-python-access-pod_default_36bf5089-8791-44dc-aa0c-40d1bd09a8c4_22,net1
sudo ip link delete br0

These commands remove the veth links that koko created for each container and delete the br0 bridge interface; run them before re-creating the network connections and bridge if you want to start over.

Conclusion

With these example commands, we can use the Koko tool to create, manage, and clean up network connections between Docker containers to meet different network configuration needs. I hope this article helps you better understand how to set up networking in a Docker environment.

Monday, September 20, 2021

Some Docker run arguments mapping to Kubernetes YAML


For instance, take the following docker run command:

docker run -ti --rm -v /lib/modules:/lib/modules --net=host --pid=host --privileged \
  ubuntu:18.04 bash

Mapping Table:
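Roughly, those flags map onto a Pod spec as sketched below. This is a minimal, hedged example rather than an exact equivalent: the pod and container names are placeholders, and --rm has no perfect counterpart, so restartPolicy: Never is only an approximation.

# A rough Pod equivalent of the docker run flags above
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: debug-shell               # placeholder name
spec:
  hostNetwork: true               # --net=host
  hostPID: true                   # --pid=host
  restartPolicy: Never            # roughly --rm
  containers:
  - name: shell
    image: ubuntu:18.04
    command: ["bash"]
    stdin: true                   # -i
    tty: true                     # -t
    securityContext:
      privileged: true            # --privileged
    volumeMounts:
    - name: lib-modules
      mountPath: /lib/modules     # -v /lib/modules:/lib/modules
  volumes:
  - name: lib-modules
    hostPath:
      path: /lib/modules
EOF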

Monday, October 7, 2019

[Dockerfile] Some of the little skills used in Dockerfile

I collect some of the small tricks I use in my Dockerfiles and keep them here for my own reference.
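For example, the kind of tricks I have in mind include merging RUN steps and cleaning the apt cache so the image stays small; a minimal sketch (the packages here are just placeholders):

$ vim Dockerfile
# Combine update and install in one RUN, and clean the apt lists to keep the layer small
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# Use ARG for values that are only needed at build time
ARG APP_VERSION=1.0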

Wednesday, August 21, 2019

[Docker] Troubleshooting to docker private registry

I created a private Docker registry as follows, and sometimes it does not respond to HTTP requests.
$ sudo docker run -p 7443:7443 --restart=always \
  -v /raid/registry2:/var/lib/registry \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:7443 \
  --name registry registry:2

$ curl -v http://192.168.10.10:7443/v2/_catalog
*   Trying 192.168.10.10...
* TCP_NODELAY set
* Connected to 192.168.10.10 (192.168.10.10) port 7443 (#0)
> GET /v2/_catalog HTTP/1.1
> Host: 192.168.0.109:7443
> User-Agent: curl/7.58.0
> Accept: */*
...(hang)...
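When this happens, the first things worth checking are whether the container is still healthy, what its logs say, and whether the registry answers locally on the host. A rough sketch of these checks (the container name matches the run command above):

# Is the registry container still up?
sudo docker ps --filter name=registry
# Anything suspicious in the registry logs?
sudo docker logs --tail 100 registry
# Does it answer on the host itself, bypassing the external network path?
curl -v http://127.0.0.1:7443/v2/_catalog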

Tuesday, May 28, 2019

[Docker] Using GUI with Docker

Recently I needed to run my GUI application with Docker so that it shows up either directly on my desktop operating system or in my SSH terminal client via X11 forwarding.

Basically, some people have already provided solutions for these cases. I just list the references and quickly give my examples.
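The usual pattern is to share the host's X11 socket and DISPLAY environment variable with the container; a minimal sketch, assuming an X server is running on the host and using xeyes only as a placeholder GUI application:

# Allow local connections to the X server (coarser than xauth, but simple)
xhost +local:
# Mount the X11 socket and pass DISPLAY so the GUI shows up on the host's screen
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  ubuntu:18.04 bash -c "apt-get update && apt-get install -y x11-apps && xeyes"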

Wednesday, May 11, 2016

[Docker] the first experience with building docker image

It is best to look for an existing Docker image first if you need some service or function running in a container. But once you want to customize it, you probably need to build your own Docker image. The official documentation gives a very complete description of how to do so; please refer to https://docs.docker.com/engine/userguide/containers/dockerimages/
The following commands are the steps I used to build a customized Drupal Docker image.
There are two ways to build your own image:

1. Updating and committing an image

First, it would be better to have a Docker Hub account.

Second, create a repository on Docker Hub for your Docker image.

Once that is done, we can continue to the next step.
# Download the official Drupal docker image
$ docker search drupal
$ docker pull drupal
$ docker images

# Create a container and update it (be aware of the following parameters)
$ docker run -i -t --name danny_drupal -p 8000:80 drupal /bin/bash
  -i, --interactive               Keep STDIN open even if not attached
  -t, --tty                       Allocate a pseudo-TTY
  -p, --publish=[]                Publish a container's port(s) to the host

# From now on, you can do anything inside your container
root@2a0849519c71:/var/www/html# apt-get update
root@2a0849519c71:/var/www/html# apt-get install openssh-server cloud-init -y
root@2a0849519c71:/var/www/html# exit

# To commit my changes
$  docker commit -m "Added my services" -a "teyenliu" \
danny_drupal teyenliu/drupal:v1

# Log in to Docker Hub before you push your change
$ docker login --username=teyenliu
$ docker push teyenliu/drupal

# The Drupal image has now been pushed successfully, and you can also see it on Docker Hub


# To test your own drupal image:
$ docker run --name danny_drupal -p 8000:80 -d teyenliu/drupal:v1

# To check if the container is running
$ docker ps

# Open your browser with http://127.0.0.1:8000



2. Building an image from a Dockerfile

$ vim Dockerfile
# This is a comment
FROM drupal:latest
MAINTAINER TeYen Liu <teyen.liu@gmail.com>
RUN apt-get update && apt-get install -y git
RUN apt-get install -y openssh-server
RUN apt-get install -y cloud-init
$ docker build -t teyenliu/drupal:v2 .
$ docker push teyenliu/drupal:v2

# The Drupal repository on Docker Hub will now also contain the image with tag v2




P.S.: If you want to put the Docker image into OpenStack Glance for later use, here is an example command:
$ docker save teyenliu/drupal | glance image-create --container-format docker --disk-format raw --name teyenliu/drupal

Friday, August 7, 2015

[Docker] What is the difference between Docker and LXC?

As we know, Docker has been a hot topic at recent cloud conferences and summits, for instance OpenStack. Google also announced that they will donate their open source project "Kubernetes" and integrate it with OpenStack. The news definitely cheers up a lot of OpenStackers. As for me, I have used LXC before but don't know much about Docker, so I was curious about the difference between Docker and LXC. Here it is:
https://www.flockport.com/lxc-vs-docker/
The content at the following URL gives me the answer to my question:
http://stackoverflow.com/questions/17989306/what-does-docker-add-to-lxc-tools-the-userspace-lxc-tools
Docker is not a replacement for lxc. "lxc" refers to capabilities of the linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations.
On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:
  • Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object which can be transferred to any docker-enabled machine, and executed there with the guarantee that the execution environment exposed to the application will be the same. Lxc implements process sandboxing, which is an important pre-requisite for portable deployment, but that alone is not enough for portable deployment. If you sent me a copy of your application installed in a custom lxc configuration, it would almost certainly not run on my machine the way it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc. Docker defines an abstraction for these machine-specific settings, so that the exact same docker container can run - unchanged - on many different machines, with many different configurations.
  • Application-centric. Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the lxc helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less ram. We think there's more to containers than just that.
  • Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use make, maven, chef, puppet, salt, debian packages, rpms, source tarballs, or any combination of the above, regardless of the configuration of the machines.
  • Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to "git pull", so new versions of a container can be transferred by only sending diffs.
  • Component re-use. Any container can be used as an "base image" to create more specialized components. This can be done manually or as part of an automated build. For example you can prepare the ideal python environment, and use it as a base for 10 different applications. Your ideal postgresql setup can be re-used for all your future projects. And so on.
  • Sharing. Docker has access to a public registry (https://registry.hub.docker.com/) where thousands of people have uploaded useful containers: anything from redis, couchdb, postgres to irc bouncers to rails app servers to hadoop to base images for various distros. The registry also includes an official "standard library" of useful containers maintained by the docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.
  • Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (maestro, salt, mesos, openstack nova), management dashboards (docker-ui, openstack horizon, shipyard), configuration management (chef, puppet), continuous integration (jenkins, strider, travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.

Reference:
Docker in technical details (Chinese version)
http://www.cnblogs.com/feisky/p/4105739.html