Tuesday, November 3, 2020

sudo docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/


Install Docker Engine on Ubuntu

 To get started with Docker Engine on Ubuntu, make sure you meet the prerequisites, then install Docker.

Prerequisites

OS requirements

To install Docker Engine, you need the 64-bit version of one of these Ubuntu versions:

  • Ubuntu Focal 20.04 (LTS)
  • Ubuntu Bionic 18.04 (LTS)
  • Ubuntu Xenial 16.04 (LTS)

Docker Engine is supported on x86_64 (or amd64), armhf, and arm64 architectures.

Uninstall old versions

Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:

$ sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

The contents of /var/lib/docker/, including images, containers, volumes, and networks, are preserved. If you do not need to save your existing data, and want to start with a clean installation, refer to the uninstall Docker Engine section at the bottom of this page.

Supported storage drivers

Docker Engine on Ubuntu supports the overlay2, aufs, and btrfs storage drivers.

Docker Engine uses the overlay2 storage driver by default. If you need to use aufs instead, you need to configure it manually. See use the AUFS storage driver.

Installation methods

You can install Docker Engine in different ways, depending on your needs:

  • Most users set up Docker’s repositories and install from them, for ease of installation and upgrade tasks. This is the recommended approach.

  • Some users download the DEB package and install it manually and manage upgrades completely manually. This is useful in situations such as installing Docker on air-gapped systems with no access to the internet.

  • In testing and development environments, some users choose to use automated convenience scripts to install Docker.

Install using the repository

Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

SET UP THE REPOSITORY

  1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:

    $ sudo apt-get update
    
    $ sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common
    
  2. Add Docker’s official GPG key:

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    

    Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88, by searching for the last 8 characters of the fingerprint.

    $ sudo apt-key fingerprint 0EBFCD88
    
    pub   rsa4096 2017-02-22 [SCEA]
          9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
    uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
    sub   rsa4096 2017-02-22 [S]
    
  3. Use the following command to set up the stable repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the commands below. Learn about nightly and test channels.

    Note: The lsb_release -cs sub-command below returns the name of your Ubuntu distribution, such as xenial. Sometimes, in a distribution like Linux Mint, you might need to change $(lsb_release -cs) to your parent Ubuntu distribution. For example, if you are using Linux Mint Tessa, you could use bionic. Docker does not offer any guarantees on untested and unsupported Ubuntu distributions.

    $ sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"
    

INSTALL DOCKER ENGINE

  1. Update the apt package index, and install the latest version of Docker Engine and containerd, or go to the next step to install a specific version:

     $ sudo apt-get update
     $ sudo apt-get install docker-ce docker-ce-cli containerd.io
    

    Got multiple Docker repositories?

    If you have multiple Docker repositories enabled, installing or updating without specifying a version in the apt-get install or apt-get update command always installs the highest possible version, which may not be appropriate for your stability needs.

  2. To install a specific version of Docker Engine, list the available versions in the repo, then select and install:

    a. List the versions available in your repo:

    $ apt-cache madison docker-ce
    
      docker-ce | 5:18.09.1~3-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu  xenial/stable amd64 Packages
      docker-ce | 5:18.09.0~3-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu  xenial/stable amd64 Packages
      docker-ce | 18.06.1~ce~3-0~ubuntu       | https://download.docker.com/linux/ubuntu  xenial/stable amd64 Packages
      docker-ce | 18.06.0~ce~3-0~ubuntu       | https://download.docker.com/linux/ubuntu  xenial/stable amd64 Packages
      ...
    

    b. Install a specific version using the version string from the second column, for example, 5:18.09.1~3-0~ubuntu-xenial.

    $ sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
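
    For instance, using the first version string from the listing above (docker-ce and docker-ce-cli are typically pinned to the same version string):

    $ sudo apt-get install docker-ce=5:18.09.1~3-0~ubuntu-xenial docker-ce-cli=5:18.09.1~3-0~ubuntu-xenial containerd.io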
    
  3. Verify that Docker Engine is installed correctly by running the hello-world image.

    $ sudo docker run hello-world
    

    This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.

Docker Engine is installed and running. The docker group is created but no users are added to it. You need to use sudo to run Docker commands. Continue to Linux postinstall to allow non-privileged users to run Docker commands and for other optional configuration steps.

UPGRADE DOCKER ENGINE

To upgrade Docker Engine, first run sudo apt-get update, then follow the installation instructions, choosing the new version you want to install.
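
In practice that usually means re-running the install command, which pulls the newest packages from the repository:

    $ sudo apt-get update
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io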

Uninstall Docker Engine

  1. Uninstall the Docker Engine, CLI, and Containerd packages:

    $ sudo apt-get purge docker-ce docker-ce-cli containerd.io
    
  2. Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:

    $ sudo rm -rf /var/lib/docker
    

You must delete any edited configuration files manually.


Monday, July 16, 2018

MySQL Docker

How to use this image

Start a mysql server instance

Starting a MySQL instance is simple:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
... where some-mysql is the name you want to assign to your container, my-secret-pw is the password to be set for the MySQL root user and tag is the tag specifying the MySQL version you want. See the list above for relevant tags.
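
For example, assuming the 5.7 tag (any valid tag works the same way):
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.7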

Connect to MySQL from an application in another Docker container

This image exposes the standard MySQL port (3306), so container linking makes the MySQL instance available to other application containers. Start your application container like this in order to link it to the MySQL container:
$ docker run --name some-app --link some-mysql:mysql -d application-that-uses-mysql
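
Note that --link is a legacy feature. On current Docker versions, a user-defined network achieves the same result; a minimal sketch (some-network is an arbitrary name, and the app reaches the database at the hostname some-mysql):
$ docker network create some-network
$ docker run --name some-mysql --network some-network -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
$ docker run --name some-app --network some-network -d application-that-uses-mysql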

Connect to MySQL from the MySQL command line client

The following command starts another mysql container instance and runs the mysql command line client against your original mysql container, allowing you to execute SQL statements against your database instance:
$ docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
... where some-mysql is the name of your original mysql container.
This image can also be used as a client for non-Docker or remote MySQL instances:
$ docker run -it --rm mysql mysql -hsome.mysql.host -usome-mysql-user -p
More information about the MySQL command line client can be found in the MySQL documentation.

... via docker stack deploy or docker-compose

Example stack.yml for mysql:
# Use root/example as user/password credentials
version: '3.1'

services:

  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Run docker stack deploy -c stack.yml mysql (or docker-compose -f stack.yml up), wait for it to initialize completely, and visit http://swarm-ip:8080, http://localhost:8080, or http://host-ip:8080 (as appropriate).

Container shell access and viewing MySQL logs

The docker exec command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your mysql container:
$ docker exec -it some-mysql bash
The MySQL Server log is available through Docker's container log:
$ docker logs some-mysql

Using a custom MySQL configuration file

The default configuration for MySQL can be found in /etc/mysql/my.cnf, which may !includedir additional directories such as /etc/mysql/conf.d or /etc/mysql/mysql.conf.d. Please inspect the relevant files and directories within the mysql image itself for more details.
If /my/custom/config-file.cnf is the path and name of your custom configuration file, you can start your mysql container like this (note that only the directory path of the custom config file is used in this command):
$ docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
This will start a new container some-mysql where the MySQL instance uses the combined startup settings from /etc/mysql/my.cnf and /etc/mysql/conf.d/config-file.cnf, with settings from the latter taking precedence.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to your new config file so that the container will be allowed to mount it:
$ chcon -Rt svirt_sandbox_file_t /my/custom

Configuration without a cnf file

Many configuration options can be passed as flags to mysqld. This will give you the flexibility to customize the container without needing a cnf file. For example, if you want to change the default encoding and collation for all tables to use UTF-8 (utf8mb4) just run the following:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
If you would like to see a complete list of available options, just run:
$ docker run -it --rm mysql:tag --verbose --help

Environment Variables

When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
See also https://dev.mysql.com/doc/refman/5.7/en/environment-variables.html for documentation of environment variables which MySQL itself respects (especially variables like MYSQL_HOST, which is known to cause issues when used with this image).

MYSQL_ROOT_PASSWORD

This variable is mandatory and specifies the password that will be set for the MySQL root superuser account. In the above example, it was set to my-secret-pw.

MYSQL_DATABASE

This variable is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (see below) then that user will be granted superuser access (corresponding to GRANT ALL) to this database.

MYSQL_USER, MYSQL_PASSWORD

These variables are optional, used in conjunction to create a new user and to set that user's password. This user will be granted superuser permissions (see above) for the database specified by the MYSQL_DATABASE variable. Both variables are required for a user to be created.
Do note that there is no need to use this mechanism to create the root superuser; that user gets created by default with the password specified by the MYSQL_ROOT_PASSWORD variable.
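
Putting these variables together, a sketch of a run command that creates a database and a matching application user on first start (mydb, myuser, and my-user-pw are illustrative values):
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=mydb -e MYSQL_USER=myuser -e MYSQL_PASSWORD=my-user-pw -d mysql:tag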

MYSQL_ALLOW_EMPTY_PASSWORD

This is an optional variable. Set to yes to allow the container to be started with a blank password for the root user. NOTE: Setting this variable to yes is not recommended unless you really know what you are doing, since this will leave your MySQL instance completely unprotected, allowing anyone to gain complete superuser access.

MYSQL_RANDOM_ROOT_PASSWORD

This is an optional variable. Set to yes to generate a random initial password for the root user (using pwgen). The generated root password will be printed to stdout (GENERATED ROOT PASSWORD: .....).
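
For example (the grep pattern matches the message quoted above; treat this as a sketch):
$ docker run --name some-mysql -e MYSQL_RANDOM_ROOT_PASSWORD=yes -d mysql:tag
$ docker logs some-mysql 2>&1 | grep 'GENERATED ROOT PASSWORD'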

MYSQL_ONETIME_PASSWORD

Sets root (not the user specified in MYSQL_USER!) user as expired once init is complete, forcing a password change on first login. NOTE: This feature is supported on MySQL 5.6+ only. Using this option on MySQL 5.5 will throw an appropriate error during initialization.

Docker Secrets

As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mysql:tag
Currently, this is only supported for MYSQL_ROOT_PASSWORD, MYSQL_ROOT_HOST, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD.
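
In a swarm, the secret itself can be created and wired up roughly like this (mysql-root is an arbitrary secret name):
$ printf 'my-secret-pw' | docker secret create mysql-root -
$ docker service create --name some-mysql --secret mysql-root -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root mysql:tag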

Initializing a fresh instance

When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
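
For example, if /my/init on the host contains schema.sql and seed-data.sql (hypothetical file names), both run in alphabetical order the first time the container starts:
$ docker run --name some-mysql -v /my/init:/docker-entrypoint-initdb.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=mydb -d mysql:tag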

Caveats

Where to Store Data

Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the mysql images to familiarize themselves with the options available, including:
  • Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
  • Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
  1. Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir.
  2. Start your mysql container like this:
    $ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir

No connections until MySQL init completes

If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as docker-compose, which start several containers simultaneously.
If the application you're trying to connect to MySQL from does not handle MySQL downtime or wait gracefully for MySQL to start, then putting a connect-retry loop before the service starts might be necessary. For an example of such an implementation in the official images, see WordPress or Bonita.
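
A minimal sketch of such a loop in shell, assuming the mysql client is available where the loop runs and that my-app stands in for your real service:
until mysql -h some-mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e 'SELECT 1' >/dev/null 2>&1; do
  echo 'waiting for mysql...'; sleep 2
done
exec my-app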

Usage against an existing database

If you start your mysql container instance with a data directory that already contains a database (specifically, a mysql subdirectory), the $MYSQL_ROOT_PASSWORD variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.

Creating database dumps

Most of the normal tools will work, although their usage might be a little convoluted in some cases to ensure they have access to the mysqld server. A simple way to ensure this is to use docker exec and run the tool from the same container, similar to the following:
$ docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
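
The reverse direction (restoring a dump) works the same way, feeding the file to the mysql client on stdin; note the -i flag so docker exec keeps stdin open:
$ docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql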

License

View license information for the software contained in this image.
As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
Some additional license information that could be auto-detected is available in the repo-info repository's mysql/ directory.
As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

Friday, July 6, 2018

Clone an image to my repo

# Re-tag the existing image under your Docker Hub namespace
docker tag oldimage tecknovice/newimage
# Authenticate against Docker Hub
docker login
# Push the re-tagged image to your repository
docker push tecknovice/newimage

Copying files from Docker container to host

In order to copy a file from a container to the host, you can use the command
docker cp <containerId>:/file/path/within/container /host/path/target
Here's an example:
[jalal@goku scratch]$ sudo docker cp goofy_roentgen:/out_read.jpg .
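
The same command works in the other direction, copying from the host into a container:
docker cp /host/path/source <containerId>:/file/path/within/container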

Friday, June 29, 2018

Docker: Get Started, Part 5: Stacks

Prerequisites

  • Install Docker version 1.13 or higher.
  • Get Docker Compose as described in Part 3 prerequisites.
  • Get Docker Machine as described in Part 4 prerequisites.
  • Read the orientation in Part 1.
  • Learn how to create containers in Part 2.
  • Make sure you have published the friendlyhello image you created by pushing it to a registry. We use that shared image here.
  • Be sure your image works as a deployed container. Run this command, slotting in your info for username, repo, and tag: docker run -p 80:80 username/repo:tag, then visit http://localhost/.
  • Have a copy of your docker-compose.yml from Part 3 handy.
  • Make sure that the machines you set up in part 4 are running and ready. Run docker-machine ls to verify this. If the machines are stopped, run docker-machine start myvm1 to boot the manager, followed by docker-machine start myvm2 to boot the worker.
  • Have the swarm you created in part 4 running and ready. Run docker-machine ssh myvm1 "docker node ls" to verify this. If the swarm is up, both nodes report a ready status. If not, reinitialize the swarm and join the worker as described in Set up your swarm.

Introduction

In part 4, you learned how to set up a swarm, which is a cluster of machines running Docker, and deployed an application to it, with containers running in concert on multiple machines.
Here in part 5, you reach the top of the hierarchy of distributed applications: the stack. A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).
Some good news is, you have technically been working with stacks since part 3, when you created a Compose file and used docker stack deploy. But that was a single service stack running on a single host, which is not usually what takes place in production. Here, you can take what you’ve learned, make multiple services relate to each other, and run them on multiple machines.
You’re doing great, this is the home stretch!

Add a new service and redeploy

It’s easy to add services to our docker-compose.yml file. First, let’s add a free visualizer service that lets us look at how our swarm is scheduling containers.
  1. Open up docker-compose.yml in an editor and replace its contents with the following. Be sure to replace username/repo:tag with your image details.
    version: "3"
    services:
    web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
    replicas: 5
    restart_policy:
    condition: on-failure
    resources:
    limits:
    cpus: "0.1"
    memory: 50M
    ports:
    - "80:80"
    networks:
    - webnet
    visualizer:
    image: dockersamples/visualizer:stable
    ports:
    - "8080:8080"
    volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
    placement:
    constraints: [node.role == manager]
    networks:
    - webnet
    networks:
    webnet:
    The only thing new here is the peer service to web, named visualizer. Notice two new things here: a volumes key, giving the visualizer access to the host’s socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager -- never a worker. That’s because this container, built from an open source project created by Docker, displays Docker services running on a swarm in a diagram.
    We talk more about placement constraints and volumes in a moment.
  2. Make sure your shell is configured to talk to myvm1 (full examples are here).
    • Run docker-machine ls to list machines and make sure you are connected to myvm1, as indicated by an asterisk next to it.
    • If needed, re-run docker-machine env myvm1, then run the given command to configure the shell.
      On Mac or Linux the command is:
      eval $(docker-machine env myvm1)
      On Windows the command is:
      & "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
  3. Re-run the docker stack deploy command on the manager, and whatever services need updating are updated:
    $ docker stack deploy -c docker-compose.yml getstartedlab
    Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
    Creating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)
  4. Take a look at the visualizer.
    You saw in the Compose file that visualizer runs on port 8080. Get the IP address of one of your nodes by running docker-machine ls. Go to either IP address at port 8080 and you can see the visualizer running:
    Visualizer screenshot
    The single copy of visualizer is running on the manager as you expect, and the 5 instances of web are spread out across the swarm. You can corroborate this visualization by running docker stack ps <stack>:
    docker stack ps getstartedlab
    The visualizer is a standalone service that can run in any app that includes it in the stack. It doesn’t depend on anything else. Now let’s create a service that does have a dependency: the Redis service that provides a visitor counter.

Persist the data

Let’s go through the same workflow once more to add a Redis database for storing app data.
  1. Save this new docker-compose.yml file, which finally adds a Redis service. Be sure to replace username/repo:tag with your image details.
    version: "3"
    services:
    web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
    replicas: 5
    restart_policy:
    condition: on-failure
    resources:
    limits:
    cpus: "0.1"
    memory: 50M
    ports:
    - "80:80"
    networks:
    - webnet
    visualizer:
    image: dockersamples/visualizer:stable
    ports:
    - "8080:8080"
    volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
    placement:
    constraints: [node.role == manager]
    networks:
    - webnet
    redis:
    image: redis
    ports:
    - "6379:6379"
    volumes:
    - "/home/docker/data:/data"
    deploy:
    placement:
    constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
    - webnet
    networks:
    webnet:
    Redis has an official image in the Docker library and has been granted the short image name of just redis, so no username/repo notation here. The Redis port, 6379, has been pre-configured by Redis to be exposed from the container to the host, and here in our Compose file we expose it from the host to the world, so you can actually enter the IP for any of your nodes into Redis Desktop Manager and manage this Redis instance, if you so choose.
    Most importantly, there are a couple of things in the redis specification that make data persist between deployments of this stack:
    • redis always runs on the manager, so it’s always using the same filesystem.
    • redis accesses an arbitrary directory in the host’s file system as /data inside the container, which is where Redis stores data.
    Together, this is creating a “source of truth” in your host’s physical filesystem for the Redis data. Without this, Redis would store its data in /data inside the container’s filesystem, which would get wiped out if that container were ever redeployed.
    This source of truth has two components:
    • The placement constraint you put on the Redis service, ensuring that it always uses the same host.
    • The volume you created that lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored in ./data on the specified host persist, enabling continuity.
    You are ready to deploy your new Redis-using stack.
  2. Create a ./data directory on the manager:
    docker-machine ssh myvm1 "mkdir ./data"
  3. Make sure your shell is configured to talk to myvm1 (full examples are here).
    • Run docker-machine ls to list machines and make sure you are connected to myvm1, as indicated by an asterisk next to it.
    • If needed, re-run docker-machine env myvm1, then run the given command to configure the shell.
      On Mac or Linux the command is:
      eval $(docker-machine env myvm1)
      On Windows the command is:
      & "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
  4. Run docker stack deploy one more time.
    $ docker stack deploy -c docker-compose.yml getstartedlab
  5. Run docker service ls to verify that the three services are running as expected.
    $ docker service ls
    ID                  NAME                       MODE                REPLICAS            IMAGE                             PORTS
    x7uij6xb4foj        getstartedlab_redis        replicated          1/1                 redis:latest                      *:6379->6379/tcp
    n5rvhm52ykq7        getstartedlab_visualizer   replicated          1/1                 dockersamples/visualizer:stable   *:8080->8080/tcp
    mifd433bti1d        getstartedlab_web          replicated          5/5                 gordon/getstarted:latest          *:80->80/tcp

  6. Check the web page at one of your nodes, such as http://192.168.99.101, and take a look at the results of the visitor counter, which is now live and storing information on Redis.
    Hello World in browser with Redis
    Also, check the visualizer at port 8080 on either node's IP address, and see the redis service running along with the web and visualizer services.
    Visualizer with redis screenshot

How to modify a Docker image that was created from an existing one

Based on a howto, I created a new image from an existing one.
Now I don't have a Dockerfile, and things happen when the container starts that I cannot change - at least that's how it looks to me.
Is there a way to modify things that were set up in the Dockerfile of the base image I used?
For example: the container runs a bash script when it starts, and I want to change this.

----------------
To answer your specific question ("the container runs a bash script when it starts, I want to change this"): let's assume you want to run /script.sh (part of the image) instead of the default. You can instantiate a container using:
docker run --entrypoint /script.sh repo/image
If script.sh isn't part of the image and/or you prefer not having to specify it explicitly each time with --entrypoint as above, you can prepare an image that contains and runs your own script.sh:
  1. Create an empty directory and copy or create script.sh in it
  2. Create Dockerfile with following content:
    FROM repo/image
    ADD script.sh /
    ENTRYPOINT /script.sh
  3. docker build -t="myimage" .
  4. docker run myimage
Notes:
  • When running the container (step 4), it's no longer necessary to specify --entrypoint since we have it defaulted in the Dockerfile.
  • It's really that simple; no need to sign up to Docker Hub or any such thing (although it's of course recommended in time ;-)
 

Manage Docker as a non-root user

  The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users ...
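
The documented fix is to create a docker group and add your user to it; a minimal sketch (log out and back in afterwards so the group membership is re-evaluated):
$ sudo groupadd docker
$ sudo usermod -aG docker $USER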