Zhixian's Tech Blog


Setting TextEdit as default Git editor

Filed under: Uncategorized — Zhixian @ 09:41:03 am

This blog post describes how to set Git's default editor on macOS to TextEdit, the text editor that comes with every MacBook.


  1. Scenario
  2. Solution


By default, Git uses vi as the editor for commit messages.

Git commit
Using vi to add commit messages

However, you might prefer to use a text editor with a graphical interface to write your commit messages.


For a task like writing commit messages, we want a text editor that is as simple as possible and fast to start up and close. On a MacBook, that editor is probably the one that comes out of the box: TextEdit.

To configure TextEdit as the default text editor for Git, type the following at the command-line:

git config --global core.editor "open -e -W -n"

Executing command to use TextEdit for Git

After you execute the command, the next time you do a Git commit, it will use TextEdit.

TextEdit to edit commit messages


You might be wondering what “open -e -W -n” is about.

open is a macOS command used to open files and applications.

-e tells the command to open files using TextEdit.

-W tells the command to wait until TextEdit is closed, so Git only reads the commit message after you are done.

-n tells the command to open a new instance of TextEdit.
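Putting it together, you can also confirm what Git will actually run. This is a sketch that writes the setting to a throwaway config file (/tmp/demo.gitconfig is just a demo path; drop the --file flag to write to your real ~/.gitconfig):

```shell
# write the editor setting to a demo config file instead of ~/.gitconfig
git config --file /tmp/demo.gitconfig core.editor "open -e -W -n"

# read the setting back to confirm what Git will launch
git config --file /tmp/demo.gitconfig core.editor
```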


Adventures in Kali Linux (Xfce)

Filed under: computing — Zhixian @ 09:06:08 am

I recently had the inclination to try Kali Linux as my development Linux OS of choice. This blog post is not so much an instructional post as a journal of what I did after I installed Kali Linux. Some might question why Kali Linux. Well, it's something to try, and a learning experience in some ways. Anyway, here goes.

  1. Set up Bluetooth

By default, it seems that Kali Linux does not come with a Bluetooth manager installed, so I chose to install blueman, a fairly popular one. The commands I used:

apt search blueman
apt-get install blueman
service bluetooth status
service bluetooth start
service bluetooth status

The service bluetooth status commands are just to check that Bluetooth is running before and after I started the service.

  2. Create a normal user with sudo rights

I suddenly remembered that I was logged on to the root account, which has super-powers! So I hurriedly created a normal user account with sudo rights instead:

useradd -m zhixian
passwd zhixian
usermod -a -G sudo zhixian
chsh -s /bin/bash zhixian
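After creating the account, it doesn't hurt to verify it; these checks are my own addition, not part of the original steps:

```shell
# confirm the account exists and see its group memberships
id zhixian

# confirm the sudo group now lists zhixian
getent group sudo
```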

  3. Install Chrome

To install Chrome, I went to the Chrome website, downloaded the DEB file, and installed it by running:

apt-get install ./google-chrome-stable_current_amd64.deb

Then I remembered that I should probably stop using the root account, since I already had my user account set up. So I logged out to proceed with the rest of my Kali Linux setup. Actually, at this point I was feeling sleepy, so I put the system into Hibernate and went for a nap… 🙂

…(an hour later)

Only to wake up and find that Kali Linux did not hibernate my laptop properly! Arghhh!
More specifically, it managed to save the session state and all, but it did not power off the laptop. What's worse, it seemed to be doing some weird operation that kept my laptop heated up. Not sure what went wrong; I still haven't gotten this fixed. Suspend and shutdown work well though, so I guess I will just have to live without hibernation for now.

  4. Enable the bluetooth service on startup

After I restarted Kali, I realised Bluetooth was not running on startup. So I ran:

sudo systemctl enable bluetooth

After leaving my Kali Linux alone for a week, I came back to it.
And found that I could not surf the Internet on it, because the date-time on the machine was not accurate. As it turns out, NTP is not installed by default (so there is no automatic date-time synchronization). So I installed the NTP service and set it to start on startup:

sudo apt-get install ntp
sudo systemctl restart ntp
sudo systemctl enable ntp
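To check that synchronization actually kicked in, a sanity check of my own (not part of the original notes):

```shell
# show clock/sync status; look for "System clock synchronized: yes"
timedatectl

# list the NTP peers the daemon is polling
ntpq -p
```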

Then I decided that I might want to use GIMP for image editing:

sudo apt-get install gimp

I toyed with the idea of installing Darktable as well, but decided to hold off installing stuff that I'm not using immediately. Hence, drawing software like Inkscape, Dia, and Krita is not installed either (for now).

I then thought of checking whether I have the common Git UI tools, gitk and git-gui.
Nope, gitk does not exist. So:

sudo apt-get install gitk

Then I tested git-gui. Cool! It's installed now.
But I note that the command to launch it is slightly different from other platforms. On other platforms, the command to invoke git-gui is, unsurprisingly, “git-gui”.
But oddly, on Linux the command is “git gui” (without the dash).
So to be consistent, I added an alias for this in my “.bash_aliases” file.
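The alias itself is a one-liner. Here's a sketch that appends it to a demo file (substitute your real ~/.bash_aliases and re-source it afterwards):

```shell
# append the alias to a demo file; use ~/.bash_aliases for real
echo "alias git-gui='git gui'" >> /tmp/demo_aliases

# confirm the line landed
grep "git-gui" /tmp/demo_aliases
```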

Some of you might be wondering, “Urgh! Why are you using gitk and git-gui? Surely there are better tools around? Heard of this product Sourcetree? Or maybe Kraken?”
LOL! True! I know these tools exist. I used to think that way too.

However, as I got more familiar with these two tools, I found that they meet most of my needs. True, other tools have additional helpful functions, but I think if you are using Git right, you won't be needing those “helpful” functions. Another reason to get familiar with gitk and git-gui is that they are not only free but also widely available across platforms; they exist by default in the Windows and macOS distributions of Git (I think). Which means you do not need to re-learn tools.

As an aside, tools are great! But sometimes I do feel overwhelmed learning all the different tools out there, especially when they do not really make me more productive.

Let's see, what else do I need to install? Oh! I might want to type Chinese occasionally, so I installed the fcitx and Google Pinyin input methods:

sudo apt-get install fcitx
sudo apt-get install fcitx-googlepinyin

Then I decided I want to take another look at Thunderbird. So:

sudo apt-get install thunderbird

I'm not sure if I want to use a dedicated mail client (Thunderbird), so I'm holding off on that decision for now.

Then I went to install engrampa, a UI frontend for managing archives like zip files, because there isn't one installed by default. Initially I wanted to install xarchiver, as I read somewhere that it is the de facto archive manager for Xfce. But after trying it out, I find that engrampa provides a better user experience. So:

sudo apt-get install engrampa

Then I recalled that I want to install Telegram as well. Unfortunately, there's no convenient apt package; instead, I have to download the binaries off their website. Installation is simply unzipping the archive, placing the contents in a convenient location, and adding shortcuts for easy access.

Now what’s left seems to be installing my various software development stuff.



Installing PowerShell Core

Filed under: computing, software, windows — Zhixian @ 23:42:01 pm

This blog post is a reminder to myself on the installation process for PowerShell Core.

What is PowerShell Core?

PowerShell Core is a version of PowerShell based on .NET Core.
The idea is to bring PowerShell to platforms other than Windows (for example, Linux and macOS).

Images of PowerShell Core Installation (for Windows)

The installer for PowerShell Core can be found at https://github.com/PowerShell/PowerShell.

If you browse to this page, you'll see the Windows installer somewhere in the middle of the page.


After you click on the link, an installer file should be downloaded to your computer.
After the file has finished downloading, you should see a file named something like “PowerShell-&lt;version&gt;-&lt;platform&gt;.msi”.


Double-clicking the file starts the installation process.
Once the installer starts up, you will see a screen like the one below.
Click the “Next” button to proceed with the installation.


Next you will come to the “End-User License Agreement” dialog.
Check the “I accept the terms in the License Agreement” checkbox.
Click the “Next” button to proceed with the next step of the installation.


After that, you will see the “Destination Folder” dialog.
You can change the installation location of PowerShell Core if you would like to install it somewhere other than the default (which is “C:\Program Files\PowerShell\”).
After you set the location, click the “Next” button to proceed with the next step of the installation.


Next you will see the “Optional Actions” dialog.

Accept the default checked items and click the “Next” button to proceed.


You are now finally ready to install PowerShell Core.
Click the “Install” button to proceed with the installation.


After you click the Install button, the installer will install PowerShell Core.


When the installation completes, you will see a screen like the one below.
Click the “Finish” button to close the installer.


After closing the installer, you can run PowerShell Core from the Windows Start menu by clicking the item labeled “PowerShell 6 (x64)”.


This brings up the PowerShell Core command-line shell.
Type “$PSVersionTable” at the prompt to see the version of PowerShell you are running.

Things to note are PSVersion and PSEdition.
PSEdition should read “Core” and PSVersion should report the version of PowerShell Core that you are running.


At this point, if you see a screen like the above, it means you have a running copy of PowerShell Core.
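If you also have an ordinary shell handy (e.g. Git Bash), the same check can be scripted; this assumes pwsh (the PowerShell Core binary) is on your PATH:

```shell
# print just the edition; should output "Core" for PowerShell Core
pwsh -NoProfile -Command '$PSVersionTable.PSEdition'
```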


Fixing Open Live Writer HTML styles toolbar

Filed under: computing, software, windows — Zhixian @ 22:59:01 pm

This blog post relates to fixing the HTML styles toolbar of a Windows blogging application called Open Live Writer.

Disclaimer: This is not a foolproof or perfect fix. I only encountered this recently, on a fresh installation on a new PC.
I do think this issue is highly related to the CSS theme used by the blog.
However, the actual cause of the problem remains unclear. The issue has been reported on GitHub here.


After installing Open Live Writer, you may notice that the HTML styles portion of the toolbar looks like the following:

open-live-writer corrupted HTML styles toolbar


There is no official solution for this issue, but you may be able to get rid of the blocks overlaying the HTML styles.

Disclaimer: This is not the correct solution. It merely provides a way to get rid of the blocks.


I assume that:

  1. You know how to run Windows File Explorer and know how to navigate to a folder location.
  2. You know how to download a zip file.
  3. You know how to extract files from a zip file.
  4. You know how to make backup copies of files.

Steps overview

  1. Download a dummy set of HTML style images from the zip file (Open-Live-Writer HTML Styles.zip) here.
    This is a set of HTML style images based on the WordPress theme “Twenty Nineteen”.
  2. Go to Open Live Writer's blog templates folder (at %APPDATA%\OpenLiveWriter\blogtemplates).
    This folder contains multiple folders.
  3. In each folder, there will be a set of bitmap (BMP) images titled P, H1, H2,… to H6.
    Go through each folder until you find the set of images that matches what you see in your Open Live Writer toolbar.
  4. Make a backup copy of the images (in case you don't like this fix).
  5. Extract the images from the zip file downloaded in step 1 and store them in this folder.
  6. Close Open Live Writer. You should see the changes when you restart Open Live Writer.
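For steps 4 and 5, a shell sketch of the backup, run from inside the matching blogtemplates subfolder (the .bmp extension is my assumption; check the actual filenames in your folder):

```shell
# copy the toolbar bitmaps into a backup subfolder before overwriting them
mkdir -p backup
for f in P.bmp H1.bmp H2.bmp H3.bmp H4.bmp H5.bmp H6.bmp; do
  if [ -f "$f" ]; then
    cp "$f" backup/   # keep the original in case you want to revert
  fi
done
```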

Steps details

1. Download a dummy set of HTML style images

I’m skipping this step.
I assume you know how to download a zip file from the Internet.

2. Go to Open Live Writer's blog templates folder (at %APPDATA%\OpenLiveWriter\blogtemplates)

Open Windows File Explorer and enter “%APPDATA%\OpenLiveWriter\blogtemplates” in the navigation bar.


You may see multiple folders in this folder.
Each folder represents a blog that you have registered with Open Live Writer.
So the following image implies that I have 3 blogs registered with Open Live Writer.


3. Go through each folder until you find the one with images similar to what you see in the toolbar.

If you could not see the preview of the image files in the folder, try setting the view layout to “Large icons”.


4. Make a backup of the images.

I'm skipping this step here.
I assume you know how to do this.

5. Extract the images from the zip file downloaded in step 1 and store them in this folder.

After replacing the image files, your folder should look something like the below:


6. Restart Open Live Writer.

After restarting Open Live Writer, your HTML styles toolbar should look something like the below image:

open-live-writer fixed HTML styles toolbar


Install .NET Core 2.2 on Ubuntu 18.04

Filed under: computing, ubuntu, Uncategorized — Zhixian @ 09:40:12 am


This is a note to self about installing the .NET Core SDK 2.2 on Ubuntu (because the instructions on Microsoft's website do not work / are incomplete).


When the .NET Core 2.2 SDK release was announced, I was keen to install it on my Ubuntu machine. So I went to their website (https://dotnet.microsoft.com/download/linux-package-manager/ubuntu18-04/sdk-2.2.100) and followed their instructions there.

The instructions are:

  1. Register Microsoft feed
  2. Install the .NET SDK

Which means executing the following commands at your command-line prompt:

wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo add-apt-repository universe
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install dotnet-sdk-2.2

Unfortunately, despite following the instructions, you may find that you still cannot install the latest .NET Core SDK, as the package cannot be found when you execute the last command! 😦


To fix this issue, you need to add Microsoft’s repository:

sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-bionic-prod bionic main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get update

After the update, you should be able to install the .NET Core 2.2 SDK by running the following command again:

sudo apt-get install dotnet-sdk-2.2
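Once installed, a quick way to confirm the SDK is visible (my suggested check, not part of Microsoft's instructions):

```shell
# list installed SDKs; a 2.2.x entry should appear
dotnet --list-sdks
```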


How to fix “topmenu-gtk-module” error in Ubuntu 18.04 LTS

Filed under: computing, ubuntu — Zhixian @ 10:09:06 am


This blog post is a quick note to myself explaining how I stopped the ‘Failed to load module “topmenu-gtk-module”’ error message from displaying.


Sometimes when launching an application in Linux you may come across an error message that reads:

Gtk-Message: 09:24:00.567: Failed to load module “topmenu-gtk-module”


You are most likely to see this error when you try to launch a desktop application from the command-line.

This error message appears because your operating system is probably missing required packages, specifically “topmenu-gtk3” or “topmenu-gtk2”.

However, if you are on Ubuntu 18.04 LTS, you will find that you cannot install these packages using the “apt-get” command-line tool, simply because they are not available; the latest versions of these packages are only available for Xenial or Artful. 😦


During the upgrade from Xenial to Bionic, the installation process disables all other PPAs.
Here are a few examples:


While it's possible to fix this issue by downloading and compiling the source for these packages, being of a lazy nature I decided against doing that. Instead, what I chose to do is intentionally add the Xenial package repository back into my “Software &amp; Updates” list:

deb http://sg.archive.ubuntu.com/ubuntu/ xenial main universe


After adding that back in, you should be prompted to update your list of packages.
If not, run:

sudo apt-get update

After the command finishes running, you can install “topmenu-gtk2” and/or “topmenu-gtk3”:

sudo apt-get install topmenu-gtk3

sudo apt-get install topmenu-gtk2

I tried installing “topmenu-gtk3” first, but that did not get rid of the message, so I went on to install “topmenu-gtk2”.

After the packages finished installing, you should no longer see the error message when you run your desktop application from the command line.


How to update your Kindle Touch firmware manually

This is a blog post describing how to update your Kindle Touch firmware manually.

The firmware is the software that runs on your Kindle Touch device. You might have to update the firmware if you did a factory reset and then found that you could not register your Kindle Touch any more. This is probably because the system that handled registration for the older firmware is no longer available. You would probably see a screen like the one below:


The instructions to transfer and install the software updates can be found here (as of 2018-05-27). The rest of this blog post is simply a more descriptive version of the instructions stated.

First, note the version of the firmware your Kindle Touch is using.
Assuming you are on the home page, this is done by clicking the menu button and selecting “Settings”.


On the Settings page, click on the menu button and select “Device Info”.


A “Device Info” dialog will pop up. You can see the version of the firmware your Kindle Touch is using on the second-last line of the dialog content. My screendump below shows the version my device was running; compare it against the latest version listed on the software updates download site (as of 2018-05-27).


To update the firmware, go to the software updates download site and download the firmware to your computer. This is done by going to the web page and clicking the “Software Update” link, as shown below:


After you click the link, you should receive a file named “update_kindle_5.3.7.3.bin”.
After the file is downloaded to your computer, connect your Kindle Touch to your PC.

After your PC has detected your Kindle Touch device, you should be able to open it using your file manager. Copy the firmware update file to the root folder of the Kindle Touch device as follows:


After the file is copied, eject your Kindle Touch device from your computer.
You are now ready to apply the update to your Kindle Touch.
To apply the update, go to the Settings page as before (when we checked the firmware version). Then, on the Settings page, click the menu button and select “Update Your Kindle”.



An “Update Your Kindle” dialog will pop up. Click the “OK” button on the dialog to proceed with the update.


After you click the “OK” button, the device will restart to apply the update.
You may see the following screen.


Eventually the update process will finish, and you will see that your Kindle Touch has been updated to the firmware version you downloaded.

If you proceed to register your Kindle Touch, you should be successful this time round.

Hope this helps.


Using ACMESharp to get SSL certificates from Let’s Encrypt

This blog post is a reminder note to myself on how to use the ACMESharp PowerShell module to get SSL certificates from Let’s Encrypt CA.

Essentially, the usage can be divided into the following phases:

  1. Install ACMESharp PowerShell module
  2. Import ACMESharp PowerShell module
  3. Initial (one-time) setup
  4. Register DNS of certificate
  5. Get “challenge” details (to prove that you are the owner of the domain)
  6. Signal Let’s Encrypt to confirm your challenge answer
  7. Download certificates

Steps 1-3 are only for setting up a new PC.
Steps 2 and 4 should be repeated for each domain that you want SSL certificates for.
Steps 2 and 5-7 should be repeated whenever you want to get or renew a certificate.

1. Install ACMESharp PowerShell module

Install-Module -Name ACMESharp -AllowClobber

2. Import ACMESharp PowerShell module

Import-Module ACMESharp


3. Initial (one-time) setup


New-ACMERegistration -Contacts mailto:zhixian@hotmail.com -AcceptTos

4. Register DNS of certificate

New-ACMEIdentifier -Dns plato.emptool.com -Alias plato_dns

5. Get challenge (to prove that you are the owner of the domain)

Complete-ACMEChallenge plato_dns -ChallengeType http-01 -Handler manual

6. Signal Let’s Encrypt to confirm your challenge answer

Submit-ACMEChallenge plato_dns -ChallengeType http-01
(Update-ACMEIdentifier plato_dns -ChallengeType http-01).Challenges | Where-Object {$_.Type -eq "http-01"}
New-ACMECertificate plato_dns -Generate -Alias plato_cert1
Submit-ACMECertificate plato_cert1
Update-ACMECertificate plato_cert1

7. Download certificates


Get-ACMECertificate plato_cert1 -ExportCertificatePEM "C:\src\certs\plato_cert1.crt.pem"
Get-ACMECertificate plato_cert1 -ExportIssuerPEM "C:\src\certs\plato_cert1-issuer.crt.pem"

Add-Content -Value (Get-Content plato_cert1.crt.pem) -Path nginx.plato.emptool.com.pem
Add-Content -Value (Get-Content plato_cert1-issuer.crt.pem) -Path nginx.plato.emptool.com.pem


ZX: Generating SSL certificates for HAPROXY is similar to NGINX, except it includes a key.

Get-ACMECertificate plato_cert1 -ExportKeyPEM "C:\src\certs\plato_cert1.key.pem"
Get-ACMECertificate plato_cert1 -ExportCertificatePEM "C:\src\certs\plato_cert1.crt.pem"
Get-ACMECertificate plato_cert1 -ExportIssuerPEM "C:\src\certs\plato_cert1-issuer.crt.pem"

Add-Content -Value (Get-Content plato_cert1.crt.pem) -Path haproxy.plato.emptool.com.pem
Add-Content -Value (Get-Content plato_cert1-issuer.crt.pem) -Path haproxy.plato.emptool.com.pem
Add-Content -Value (Get-Content plato_cert1.key.pem) -Path haproxy.plato.emptool.com.pem



Get-ACMECertificate plato_cert1 -ExportPkcs12 "C:\src\certs\iis.plato_cert1.pfx"



How to deploy files to Windows using SFTP via Gitlab pipelines


This blog post describes how to deploy files to a Windows server via SFTP using Gitlab pipelines with shared runners.

The practical upshot is that you can use Gitlab pipelines to deploy the files of a website served by an Internet Information Services (IIS) server.

Note: The context of this post is about deploying websites but the steps described can be used for deploying any type of file using Gitlab pipelines.


  1. Assumptions
  2. What are Gitlab pipelines
  3. How Gitlab pipelines work
  4. Sample .gitlab-ci.yml


Assumptions

  1. You have a working Gitlab account.
  2. You have a working Gitlab repository.
  3. You have a Windows server.
  4. You have an SFTP server running on your Windows server, and a working SFTP account for it.

If you do not have an SFTP server, you can consider the SFTP/SCP Server from SolarWinds.
It's not a fantastic product, but it will have to do (considering it is free).
The software is available for download after registration.

What are Gitlab pipelines

To put it simply, pipelines are Gitlab's mechanism for performing tasks that you specify whenever you check files into your Gitlab repository. These tasks are executed by processes dubbed "runners" in Gitlab terminology.

Runners can be grouped into shared and private (non-shared) runners.

Shared runners are hosted by Gitlab for any Gitlab user that wishes to use them. They are free to use, but are limited to 2000 CI minutes per month unless you upgrade your Gitlab plan.

In comparison, private runners are set up using your own resources. After you set up your private runner, you have to register it with Gitlab in order for Gitlab to use it.

How Gitlab pipelines work

When you check files into your Gitlab repository, Gitlab will check for the existence of a file called ".gitlab-ci.yml". This file must be named exactly as typed (it is case-sensitive). The existence of this file tells Gitlab that there are tasks to be done; it lists out the "jobs" for Gitlab to carry out.

Side note: As can be guessed from the file extension ".yml", this is a YAML (YAML Ain't Markup Language) file. For details of YAML syntax, see http://www.yaml.org/

Sample .gitlab-ci.yml

As mentioned in the summary of this blog post, we want to set up a Gitlab pipeline that deploys to our SFTP server whenever we check in a file. Below is a ".gitlab-ci.yml" file that allows us to do that.

image: alpine

before_script:
  - apk update
  - apk add openssh sshpass lftp

deploy_pages:
  stage: deploy
  script:
    - ls -al
    - mkdir .public
    - cp -r * .public
    - echo "pwd" | sshpass -p $SFTP_PASSWORD sftp -o StrictHostKeyChecking=no zhixian@servername.somedomain.com
    - lftp -e "mirror -R .public/ /test" -u zhixian,$SFTP_PASSWORD sftp://servername.somedomain.com
  artifacts:
    paths:
      - .public
  only:
    - master

The following is what each of the lines does:

Line 1: Declares that the "jobs" will be executed in a Docker container using the image "alpine". The "alpine" image used here is one of the lightest Linux containers, Alpine Linux. You can use other images, as long as the image is in the Docker store.

Line 3: The "before_script" section. Actions declared in this section are carried out before any job is executed.

Line 4: Updates the package index of the Alpine Linux package manager, "apk". By default the index is empty, so we need to populate it with the software catalog.

Line 5: Install the "openssh", "sshpass" and "lftp" software packages.

Line 7: Our declaration of a job called "deploy_pages".

Line 8: Indicates that this job is to be executed in the "deploy" stage.

Quick concept of "stage": basically, jobs are executed in stages, in the order "build", "test", then "deploy". Jobs in the same stage are executed concurrently (assuming there are sufficient runners to execute them).

Line 9: The "script" section. Actions to be carried out for the job are specified here.

Line 10: Lists the files at the Docker container entry point. By default, Gitlab dumps a copy of your code repository at the container entry point, and I like to see a list of the files. This is otherwise a frivolous step that is not needed.

Lines 11 and 12: Make a directory called ".public" (note the period in front of "public") and copy all the files at the entry point into this directory.

ZX: This step facilitates the lftp command at line 14. The problem is that Gitlab dumps a copy of the git repository at the entry point as well, and we don't want to accidentally deploy the git repository; hence the copying of files to a sub-directory.

Line 13: Starts an SFTP session to "servername.somedomain.com" as user "zhixian", using the password stored in the secret variable "$SFTP_PASSWORD"; it executes the SFTP command "pwd", then terminates the session.

ZX: This step seems frivolous, but it is essential to the success of this job.
As mentioned, jobs are executed in a fresh Docker container.
Hence, the first connection to any new SSH-based host would normally prompt us to accept the fingerprint key for that host.
This line creates an SFTP connection and accepts the fingerprint key without prompting.
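An alternative to the throwaway "pwd" session is ssh-keyscan, which fetches the host key up front. A sketch, using the same placeholder host name as the sample above:

```shell
# pre-populate known_hosts so later sftp/lftp calls don't prompt
mkdir -p ~/.ssh
ssh-keyscan servername.somedomain.com >> ~/.ssh/known_hosts
```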

ZX: Note "$SFTP_PASSWORD". This is a secret variable set in your Gitlab repository's "Settings" section, under the "Pipelines" subsection.


If you scroll down, you will see a "Secret variables" section like the below. The password to the SFTP account is specified here.


Line 14: Executes the "lftp" command. Here, we use the "mirror" feature of lftp. This feature makes a replica of the file structure of the source to the destination.

ZX: Note the "sftp://" prefix in front of the server domain name ("servername.somedomain.com"). It is important to include this to establish SFTP connectivity. If this is not specified, lftp will assume normal FTP.

Line 15: Specify the "artifacts" section. Items listed under the "artifacts" section will be available for download after the job is completed.

Line 16: Specify the "paths" section for the artifacts.

Line 17: Specifies that the ".public" folder is to be treated as an artifact made available for download.

Line 18: The "only" section specifies which branches of code cause this job to be executed.

Line 19: Specifies that this job is to be executed only when someone checks in to the "master" branch.

That’s basically all that is needed to get Gitlab to send files to your SFTP server.


Configuration of your jobs with .gitlab-ci.yml (https://docs.gitlab.com/ee/ci/yaml/)


Cannot pull images from docker.io

Filed under: docker — Zhixian @ 18:14:09 pm


  1. You are unable to download docker images from the repository.
  2. You received a network timed out error message.
  3. This issue is probably due to your Docker DNS Server setting. Switch it from Automatic to Fixed to resolve the issue.


If you have just installed Docker on Windows (in my case, Windows 10 Pro), you may encounter the following error message when trying to pull a Docker image from docker.io:

C:\VMs\Docker>docker pull hello-world
Using default tag: latest
Pulling repository docker.io/library/hello-world
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/hello-world/images. You may want to check your internet connection or if you are behind a proxy.


However, when you open your browser and navigate to the image's URL (https://index.docker.io/v1/repositories/library/hello-world/images), you find that you have no problems reaching it.


This may be due to an issue with Docker's Network settings.
Specifically, the problem may be with the DNS Server setting.
The DNS Server is set to Automatic by default, and the automatically chosen DNS server may not be able to resolve the Docker image repository.


To resolve this issue, simply set the DNS Server setting to “Fixed”.
For the IP address of the DNS server, you can probably accept the default of “” (which points to Google's DNS server).
After clicking the “Fixed” radio button, click the “Apply” button to apply your changes.
This will cause Docker to restart.


After Docker has restarted, you should find that you are able to pull Docker images without any issues.

