Zhixian's Tech Blog

2018-06-15

How to fix “topmenu-gtk-module” error in Ubuntu 18.04 LTS

Filed under: computing, ubuntu — Zhixian @ 10:09:06 am

Overview

This blog post is a quick note to myself explaining how I stopped the 'Failed to load module "topmenu-gtk-module"' error message from being displayed.

Scenario

Sometimes when launching an application in Linux you may come across an error message that reads:

Gtk-Message: 09:24:00.567: Failed to load module “topmenu-gtk-module”

topmenu-gtk-module

You are most likely to see this error when you try to launch a desktop application from the command-line.

This error message appears because your operating system is probably missing required packages, specifically “topmenu-gtk3” or “topmenu-gtk2”.

However, if you are on Ubuntu 18.04 LTS, you will find that you cannot install these packages using the "apt-get" command-line tool, simply because they are not available. The latest versions of these packages are only available for Xenial or Artful. 😦

unable-to-locate-package

During the upgrade from Xenial to Bionic, the installation process disables all other PPAs.
Here are a few examples:

disabled-ppas

While it is possible to fix this issue by downloading and compiling the source files for these packages, being of a lazy nature I decided against doing that. Instead, what I chose to do is intentionally add the Xenial package repository back into my list of sources in "Software & Updates".

deb http://sg.archive.ubuntu.com/ubuntu/ xenial main universe

include-xenial
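If you prefer doing this from a terminal instead of the "Software & Updates" window, appending the same line to a new sources file should achieve the same result. This is just a sketch: the mirror URL is the Singapore mirror I use, and the file name is arbitrary.

echo 'deb http://sg.archive.ubuntu.com/ubuntu/ xenial main universe' | sudo tee /etc/apt/sources.list.d/xenial.list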

After I added that back in, the system should prompt you to update your list of packages.
If it does not, run:

sudo apt-get update

After the command finishes running, you can install "topmenu-gtk2" and/or "topmenu-gtk3".

sudo apt-get install topmenu-gtk3

sudo apt-get install topmenu-gtk2

I tried installing "topmenu-gtk3" first, but that did not get rid of the message, so I went on to install "topmenu-gtk2" as well.

After the packages finish installing, you should no longer see the error message when you run your desktop application from the command-line.


2018-05-27

How to update your Kindle Touch firmware manually

This is a blog post describing how to update your Kindle Touch firmware manually.

The firmware is the software that runs on your Kindle Touch device. You might have to update the firmware if you did a factory reset and then found that you could not register your Kindle Touch any more. This is probably because the system that was handling the registration of the Kindle Touch device is no longer available. You would probably see a screen like the one below:

unable-to-connect

The instructions to transfer and install the software updates can be found here (as of 2018-05-27). The rest of this blog post is simply a more descriptive version of those instructions.

First, I note the version of the firmware that my Kindle Touch is using.
Assuming you are on the home page, this is done by clicking the menu button and selecting "Settings".

go-to-settings

On the Settings page, click on the menu button and select "Device Info".

go-to-device-info

A "Device Info" dialog will pop up. You can see the version of the firmware that your Kindle Touch is using on the second-last line of the dialog content. As shown in my screen dump below, my Kindle Touch device is currently using 5.3.7.2. The latest version of the software as stated on the software updates download site is 5.3.7.3 (as of 2018-05-27).

device-info

To update the firmware, go to the software updates download site and download the firmware to your computer. This is done by going to the web page and clicking the "Software Update 5.3.7.3" link as shown below:

download-updates

After you click the link, you should receive a file named "update_kindle_5.3.7.3.bin".
After the file is downloaded to your computer, connect your Kindle Touch to your PC.

After your PC has detected your Kindle Touch device, you should be able to open it using your file manager. Copy the firmware update file to the root folder of the Kindle Touch device as follows:

transfer-update
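If your computer happens to run Linux and the Kindle Touch mounts as an ordinary USB drive, the copy can also be done from a terminal. The mount point below is only a guess; check where your system actually mounted the device:

cp ~/Downloads/update_kindle_5.3.7.3.bin /media/$USER/Kindle/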

After the file is copied, eject your Kindle Touch device from your computer.
You are now ready to apply the update to your Kindle Touch.
To apply the update, go to the Settings page as described above when we checked the firmware version. Then, on the Settings page, click on the menu button and select "Update Your Kindle".

 

update-kindle-button

An "Update Your Kindle" dialog will pop up. Click on the "OK" button on the dialog to proceed with the update.

update-your-kindle2

After you click the "OK" button, the device will restart to apply the update.
You may see the following screen:

updating-kindle

Eventually, the update process will finish, and you will see that your Kindle Touch has been updated to the firmware version that you downloaded.

If you proceed to register your Kindle Touch, you should be successful this time round.

Hope this helps.

2017-07-12

Using ACMESharp to get SSL certificates from Let’s Encrypt

This blog post is a reminder note to myself on how to use the ACMESharp PowerShell module to get SSL certificates from Let’s Encrypt CA.

Essentially, the usage can be divided into the following phases:

  1. Install ACMESharp PowerShell module
  2. Import ACMESharp PowerShell module
  3. Initial (one-time) setup
  4. Register DNS of certificate
  5. Get “challenge” details (to prove that you are the owner of the domain)
  6. Signal Let’s Encrypt to confirm your challenge answer
  7. Download certificates

Steps 1-3 are only for setting up on a new PC.
Steps 2 and 4 should be repeated for each domain that you want SSL certificates for.
Steps 2 and 5-7 should be repeated whenever you want to get or renew a certificate.

1. Install ACMESharp PowerShell module

Install-Module -Name ACMESharp -AllowClobber

2. Import ACMESharp PowerShell module

Import-Module ACMESharp

 

3. Initial (one-time) setup

Initialize-ACMEVault

New-ACMERegistration -Contacts mailto:zhixian@hotmail.com -AcceptTos

4.  Register DNS of certificate

New-ACMEIdentifier -Dns plato.emptool.com -Alias plato_dns

5. Get challenge (to prove that you are the owner of the domain)

Complete-ACMEChallenge plato_dns -ChallengeType http-01 -Handler manual
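The manual handler prints out the details of the HTTP challenge: the path under your web site (something like /.well-known/acme-challenge/...) and the exact content that Let's Encrypt expects to find there. If you need to display those details again later, something along these lines should work; the exact property names may vary between ACMESharp versions:

(Update-ACMEIdentifier plato_dns -ChallengeType http-01).Challenges |
    Where-Object { $_.Type -eq "http-01" } |
    Select-Object -ExpandProperty Challenge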

6. Signal Let’s Encrypt to confirm your challenge answer

Submit-ACMEChallenge plato_dns -ChallengeType http-01
(Update-ACMEIdentifier plato_dns -ChallengeType http-01).Challenges | Where-Object {$_.Type -eq "http-01"}
New-ACMECertificate plato_dns -Generate -Alias plato_cert1
Submit-ACMECertificate plato_cert1
Update-ACMECertificate plato_cert1

7. Download certificates

NGINX

Get-ACMECertificate plato_cert1 -ExportCertificatePEM "C:\src\certs\plato_cert1.crt.pem"
Get-ACMECertificate plato_cert1 -ExportIssuerPEM "C:\src\certs\plato_cert1-issuer.crt.pem"

Add-Content -Value (Get-Content plato_cert1.crt.pem) -Path nginx.plato.emptool.com.pem
Add-Content -Value (Get-Content plato_cert1-issuer.crt.pem) -Path nginx.plato.emptool.com.pem

HAPROXY

ZX: Generating SSL certificates for HAPROXY is similar to NGINX, except it includes a key.

Get-ACMECertificate plato_cert1 -ExportKeyPEM "C:\src\certs\plato_cert1.key.pem"
Get-ACMECertificate plato_cert1 -ExportCertificatePEM "C:\src\certs\plato_cert1.crt.pem"
Get-ACMECertificate plato_cert1 -ExportIssuerPEM "C:\src\certs\plato_cert1-issuer.crt.pem"

Add-Content -Value (Get-Content plato_cert1.crt.pem) -Path haproxy.plato.emptool.com.pem
Add-Content -Value (Get-Content plato_cert1-issuer.crt.pem) -Path haproxy.plato.emptool.com.pem
Add-Content -Value (Get-Content plato_cert1.key.pem) -Path haproxy.plato.emptool.com.pem
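To sanity-check the combined PEM file before pointing HAProxy at it, you can inspect the first certificate inside it with openssl (assuming you have openssl available on the machine):

openssl x509 -in haproxy.plato.emptool.com.pem -noout -subject -dates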

 

IIS

Get-ACMECertificate plato_cert1 -ExportPkcs12 "C:\src\certs\iis.plato_cert1.pfx"

 

2017-07-09

How to deploy files to Windows using SFTP via Gitlab pipelines

Summary

This blog post describes how you would deploy files to a Windows Server via SFTP using Gitlab pipelines with shared runners.

The practical upshot of this is that you can deploy the files for your website to be served by an Internet Information Services (IIS) server using Gitlab pipelines.

Note: The context of this post is about deploying websites but the steps described can be used for deploying any type of file using Gitlab pipelines.

Contents

  1. Assumptions
  2. What are Gitlab pipelines
  3. How Gitlab pipelines work
  4. Sample .gitlab-ci.yml

Assumptions

  1. You have a working Gitlab account.
  2. You have a working Gitlab repository.
  3. You have a Windows Server.
  4. You have an SFTP server running on your Windows Server and a working SFTP account for that server.

If you do not have an SFTP server, you can consider the SFTP/SCP Server from SolarWinds.
It's not a fantastic product, but it will do (considering that it is free).
The software is available at the following URL after registration:
http://www.solarwinds.com/free-tools/free-sftp-server/registration

What are Gitlab pipelines

To put it simply, pipelines are Gitlab's mechanism for performing tasks that you specify whenever you check files into your Gitlab repository. These tasks are executed by processes (dubbed "runners" in Gitlab terminology).

Runners can be grouped into shared and private (non-shared) runners.

Shared runners are hosted by Gitlab and can be used by any Gitlab user who wishes to use them. They are free to use but are limited to 2000 CI minutes per month unless you upgrade your Gitlab plan.

In comparison, private runners are set up using your own resources. After you set up your private runner, you have to register it with Gitlab in order for Gitlab to use it.

How Gitlab pipelines work

When you check files into your Gitlab repository, Gitlab will check for the existence of a file called ".gitlab-ci.yml". This file must be named exactly as typed (it is case-sensitive). The existence of this file tells Gitlab that there are tasks to be done. This file lists out the "jobs" for Gitlab to carry out.

Side note: As can be guessed from the file extension ".yml", this is a YAML (YAML Ain't Markup Language) file. For details on the syntax of YAML, see http://www.yaml.org/

Sample .gitlab-ci.yml

As mentioned in the summary of this blog post, we want to set up a Gitlab pipeline that deploys to our SFTP server whenever we check in a file. The ".gitlab-ci.yml" file below allows us to do that.

image: alpine

before_script:
  - apk update
  - apk add openssh sshpass lftp

deploy_pages:
  stage: deploy
  script:
    - ls -al
    - mkdir .public
    - cp -r * .public
    - echo "pwd" | sshpass -p $SFTP_PASSWORD sftp -o StrictHostKeyChecking=no zhixian@servername.somedomain.com
    - lftp -e "mirror -R .public/ /test" -u zhixian,$SFTP_PASSWORD sftp://servername.somedomain.com
  artifacts:
    paths:
      - .public
  only:
    - master

The following is what each of the lines does:

Line 1: Declare that jobs will be executed in a Docker container that uses the image "alpine". The "alpine" image used here is one of the lightest Linux containers, Alpine Linux. You can use other images, as long as the image is available in the Docker store.

Line 3: The "before_script" section. The actions declared in this section are carried out before any jobs are executed.

Line 4: Update the Alpine Linux software package manager, "apk". By default, "apk" is empty, so we need to populate it with the software catalog.

Line 5: Install the "openssh", "sshpass" and "lftp" software packages.

Line 7: Our declaration of a job called "deploy_pages".

Line 8: Indicate that this job is to be executed in the "deploy" stage.

Quick concept of "stage": basically, jobs are executed in different stages, in the order "build", "test", and "deploy". Jobs in the same stage are executed concurrently (assuming there are sufficient runners to execute the jobs).

Line 9: The "script" section. The actions to be carried out for the job are specified under here.

Line 10: List the files at the Docker container entry point. By default, Gitlab will dump a copy of your code repository at the container entry point. I like to see a list of the files; this is otherwise a frivolous step that is not needed.

Lines 11 and 12: Make a directory called ".public" (note the period in front of "public") and copy all files at the entry point into this directory.

ZX: This step is for facilitating the lftp command at line 14. The problem is that Gitlab will dump a copy of the git repository at the entry point as well, but we don't want to accidentally deploy the git repository; hence the copying of files to a sub-directory.

Line 13: Start an SFTP session to "servername.somedomain.com" using the account name "zhixian" and the password stored in the secret variable "$SFTP_PASSWORD".
Execute the SFTP command "pwd" and terminate the SFTP session.

ZX: This step seems frivolous, but it is essential to the success of this job.
As mentioned, jobs are executed in a Docker container environment.
Hence, if we initiate any form of connection to a new SSH-based host, the system will prompt us to accept the "fingerprint-key" for that host.
This line creates the SFTP connection and accepts the "fingerprint-key" without any prompt.
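ZX: An alternative way to pre-accept the host key, instead of the throwaway sftp session, is ssh-keyscan, which should be available from the "openssh" package installed earlier. The sketch below would take the place of line 13; adjust the hostname to your own server:

    - mkdir -p ~/.ssh
    - ssh-keyscan servername.somedomain.com >> ~/.ssh/known_hosts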

ZX: Note the "$SFTP_PASSWORD". This is a secret variable set under your Gitlab repository's "Settings" section, in the "Pipelines" subsection.

2017-07-09_001326

If you scroll down, you will see a "Secret variables" section like the below. The password to the SFTP account is specified here.

2017-07-09_001418

Line 14: Executes the "lftp" command. Here, we use the "mirror" feature of lftp. This feature replicates the file structure of the source at the destination.

ZX: Note the "sftp://" prefix in front of the server domain name ("servername.somedomain.com"). It is important to include this to establish SFTP connectivity. If this is not specified, lftp will assume normal FTP.

Line 15: Specify the "artifacts" section. Items listed under the "artifacts" section will be available for download after the job is completed.

Line 16: Specify the "paths" section for the artifacts.

Line 17: Specify that the ".public" folder is to be treated as an artifact made available for download.

Line 18: Specify the "only" section, which restricts the branches for which this job will be executed.

Line 19: Specify that this job is to be executed only when someone checks in to the "master" branch.

That’s basically all that is needed to get Gitlab to send files to your SFTP server.

References

Configuration of your jobs with .gitlab-ci.yml (https://docs.gitlab.com/ee/ci/yaml/)

2016-09-12

Cannot pull images from docker.io

Filed under: docker — Zhixian @ 18:14:09 pm

Summary

  1. You are unable to download docker images from the repository.
  2. You received a network timed out error message.
  3. This issue is probably due to your Docker DNS Server setting. Switch it from Automatic to Fixed to resolve the issue.

Details

If you just installed docker in Windows (in my case, it is Windows 10 Pro), you may encounter the following error message when trying to pull a docker image from docker.io:

C:\VMs\Docker>docker pull hello-world
Using default tag: latest
Pulling repository docker.io/library/hello-world
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/hello-world/images. You may want to check your internet connection or if you are behind a proxy.

image

However, when you open up your browser and navigate to the URL of the image (https://index.docker.io/v1/repositories/library/hello-world/images), you will find that you have no problems reaching it.

image

This may be due to an issue with the Network settings of Docker.
Specifically, the problem may be with the DNS Server setting.
The DNS Server is set to Automatic by default, and the automatically chosen DNS server may not be able to resolve the docker image repository.
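If you want to confirm that a public DNS server such as Google's 8.8.8.8 can resolve the registry (while your current one cannot), a quick check from a command prompt is the following; the second argument tells nslookup which DNS server to query:

nslookup index.docker.io 8.8.8.8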

image

To resolve this issue, simply set the DNS Server setting to “Fixed”.
For the IP address of the DNS Server, you can probably accept the default of "8.8.8.8" (which points to Google's DNS server).
After clicking on the "Fixed" radio button, click on the "Apply" button to apply your changes.
This will cause Docker to restart.

image

After Docker has restarted, you should find that you are able to pull docker images without any issues.

image

2016-01-04

Fixing “The Parallel port driver service failed to start” on Windows 2003

Filed under: computing, windows — Zhixian @ 19:42:01 pm

My first blog post for 2016.
This is a reminder blog post.

Summary

  1. Symptoms
  2. Solution
  3. Reference

Symptoms

When your Windows 2003 machine boots up, you may see a message like the one below:

VirtualBox_Win2k3-ZXDBM_04_01_2016_19_14_09

When you log into Windows and examine the Event Viewer, you may see an error under System.

VirtualBox_Win2k3-ZXDBM_04_01_2016_19_21_43

When you open up the error, you will see the following error message:

VirtualBox_Win2k3-ZXDBM_04_01_2016_19_23_32

 

Solution

Start a Windows command prompt and run the following command:

sc config parport start= disabled

Note the space after “start=” in the above command. It is required.
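To double-check that the change took effect, you can query the service configuration; the START_TYPE line in the output should now read DISABLED:

sc qc parport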

After you run this command, you should not see the error message prompt on your next Windows bootup.
Note: This solution deviates from the one stated in the reference.

 

Reference

  1. Error message on a Windows Vista-based or Windows Server 2008-based computer that does not have a parallel port: "The Parallel port driver service failed to start"

2015-11-02

Minix3 Basic Software Sets

Filed under: computing, minix3 — Zhixian @ 17:49:11 pm

Installing the basic software sets is done by executing the following commands at the command line:

# pkgin update
# pkgin_sets

When you execute pkgin_sets, it will show the following screens and prompt you to install each set one by one.

Zhixian’s note: The software installed can be found in /usr/pkg/bin (or /usr/pkg/sbin for system executables).

First prompt installs:

  1. openssh
  2. vim (exception from the above note; executable is found at /usr/bin/vi)
  3. curl

Second prompt installs:

  1. git-base
  2. bmake
  3. gmake
  4. binutils
  5. clang

Third prompt installs:

  1. bison
  2. groff
  3. perl
  4. python (the executable for python is named “python2.7” instead of “python” as found in other installations.)

First Prompt: [screenshot]

Second Prompt: [screenshot]

Third Prompt: [screenshot]

For some reason, the tiff library is missing from the repository. [screenshot]

Installation complete: [screenshot]

 

Searching for and installing the tiff library that was missed out earlier.

image
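A sketch of the search-and-install, assuming the package in the pkgin repository is simply named "tiff":

# pkgin search tiff
# pkgin install tiff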

Minix3 Basic Post Installation Setup

Filed under: Uncategorized — Zhixian @ 17:00:11 pm

Summary

  1. Set password for root account
  2. Set timezone
  3. Set hostname

 

Set Password for root account

By default, there is no password assigned for the root account.
You can set a password for the root account using the following command:

# passwd

image

Set timezone

Before you can set the timezone, you need to identify its standard name.
Look in the /usr/share/zoneinfo directory for the list of timezones.

image

In my case, my country, Singapore, is located under Asia.

image

So my timezone is determined to be Asia/Singapore.
To set the timezone, enter the following command:

# echo export TZ=Asia/Singapore > /etc/rc.timezone

image

This will take effect when I log in again.
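If you do not want to log out and back in, you should be able to load the new setting into the current shell by sourcing the file you just wrote (assuming a Bourne-compatible shell), then checking with date:

# . /etc/rc.timezone
# date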

 

Set hostname

The default hostname can be rather non-descriptive.

image

So you might want to change that.
For example, since I chose the hostname "rocket", I would enter the following command to set it:

# hostname rocket

image

Zhixian's note: This change does not seem to persist after a reboot. 😦
I am still exploring how to do this correctly.
The best solution I have seen so far is from http://osdir.com/ml/minix3/2011-12/msg00072.html
It suggests putting your IP address and host name into the /etc/hosts file.
Here's the excerpt pertaining to the problem and the suggested solution: [screenshot]

In case you are wondering, here's what my /etc/hosts file looks like:

image
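Since the screen dump does not come across well in text, a minimal sketch of such an entry, using the hostname rocket from above and a made-up LAN address, would be:

127.0.0.1      localhost
192.168.1.50   rocket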

Reference

http://wiki.minix3.org/doku.php?id=usersguide:postinstallation

MINIX3 Installation

Filed under: computing, minix3 — Zhixian @ 16:32:11 pm

A list of screen dumps that I took while installing Minix3 on VirtualBox.
I am dumping the screens first; I intend to annotate them at a later date.

 

End of installation

2015-10-30

Installing Alpine

Filed under: Uncategorized — Zhixian @ 23:08:10 pm

I do not find the default "mail" program in Linux user-friendly.

Hence, I would suggest another console e-mail application.
Two applications come to mind: pine and elm.
I'm picking pine, as I have slightly more familiarity with it and I think it is more user-friendly than elm.

Pine is a proprietary application owned by the University of Washington, so I am picking the open-source version of this application, called Alpine. To install Alpine, type the following command:

$ sudo apt-get install alpine

After the installation has completed, type the following command to run it:

$ alpine