Friday, September 24, 2021

Gnome Terminal failed to run after upgrading default Python 3 interpreter to Python 3.7 in Ubuntu 18.04

Ubuntu 18.04 comes with Python 3.6 by default. You might be inclined to do a side-by-side installation of Python 3.7 as explained at: https://blog.globaletraining.com/2020/06/09/how-to-upgrade-to-python-3-7-on-ubuntu-18-x/ and then switch the default Python 3 interpreter from Python 3.6 to Python 3.7 with $ sudo update-alternatives --config python3. However, this has an unwanted side effect: Gnome Terminal fails to open/run afterward.

You can fix this issue by editing the file /usr/bin/gnome-terminal. By default, this script uses /usr/bin/python3 as its Python interpreter. Change it to use /usr/bin/python3.6 instead, as explained over at: https://askubuntu.com/questions/1132349/terminal-not-opening-up-after-upgrading-python-to-3-7, i.e. change the #!/usr/bin/python3 line to #!/usr/bin/python3.6 and you should be OK afterward.
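
For reference, the whole fix is a one-line edit of the script's shebang. The sed one-liner below is just a convenience sketch; it assumes the stock #!/usr/bin/python3 shebang is still in place, and it backs the file up first to be safe:

sudo cp /usr/bin/gnome-terminal /usr/bin/gnome-terminal.bak
sudo sed -i '1s|^#!/usr/bin/python3$|#!/usr/bin/python3.6|' /usr/bin/gnome-terminal
head -n 1 /usr/bin/gnome-terminal   # should now print #!/usr/bin/python3.6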

Sunday, September 12, 2021

Installing TensorRT 7.2.1, cuDNN 8.0.4 and Cuda 11.0 update 1 in Ubuntu 18.04 (x86_64)

  1. Make sure you have an up-to-date Nvidia display driver. You need v450.x or newer for Cuda 11 to work (I'm using v470.x). If it's not up to date, you can update it via the standard Ubuntu repository or via the Nvidia Ubuntu PPA, as explained over at: https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-18-04-bionic-beaver-linux. In my case, the standard Ubuntu repository works without any problem. If your machine uses UEFI Secure Boot and the Nvidia PPA driver, make sure to follow the driver installation steps carefully, especially the step that asks for a password; remember the password you enter, because you will be asked for it again when you restart the machine to complete the driver installation.
  2. Follow the Cuda 11.0 update 1 installation guide at: https://docs.nvidia.com/cuda/archive/11.0/cuda-installation-guide-linux/index.html. Note: you can install a specific version of Cuda on Ubuntu 18.04 as follows (using Cuda 11.1 as an example here):
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
    sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
    wget https://developer.download.nvidia.com/compute/cuda/11.1.0/local_installers/cuda-repo-ubuntu1804-11-1-local_11.1.0-455.23.05-1_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1804-11-1-local_11.1.0-455.23.05-1_amd64.deb
    sudo apt-key add /var/cuda-repo-ubuntu1804-11-1-local/7fa2af80.pub
    sudo apt-get update
    sudo apt-get install cuda-11.1
    
    by using "cuda-11.1" parameter in the last command, that specific version of cuda will be installed instead of newest version.
  3. Follow cuDNN 8.0.4 installation guide at: https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-804/install-guide/index.html#installlinux-deb
  4. Follow the TensorRT 7.2.1 installation guide at https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-721/install-guide/index.html#installing-debian up to the "sudo apt-get update" step (i.e. before the "sudo apt-get install tensorrt" step). Then, disable the nvidia compute deb sources (https://developer.download.nvidia.com/compute/*) in your Ubuntu sources list file(s), except the ones in the TensorRT deb sources file. The TensorRT deb sources file is usually named /etc/apt/sources.list.d/nv-tensorrt-cudaXX.x*.list. You disable a "nvidia compute" deb source by commenting out the line, i.e. adding a "#" (without the quotes) to the beginning of each line in the deb sources files (/etc/apt/sources.list.d/cuda.list, etc.) that points to a https://developer.download.nvidia.com/compute/* URL. Then, install TensorRT along with its components normally ("sudo apt-get install tensorrt", etc.). Once the TensorRT installation has finished, re-enable the nvidia compute deb sources by deleting the "#" comments you added to those lines. This TensorRT 7.2.1 installation fix is explained over at: https://github.com/NVIDIA/TensorRT/issues/792 (a small shell sketch of the comment/uncomment dance follows the excerpt below). This is the excerpt:
    For those who tried this approach, and yet the problem didn't get solved, it seems like there are more than one place storing nvidia deb-src links (https://developer.download.nvidia.com/compute/*) and these links overshadowed the actual deb link of dependencies corresponding with your tensorrt version.
Just comment out these links in every possible place inside /etc/apt directory at your system (for instance: /etc/apt/sources.list , /etc/apt/sources.list.d/cuda.list , /etc/apt/sources.list.d/cuda_learn.list , /etc/apt/sources.list.d/nvidia-ml.list (except your nv-tensorrt deb-src link)) before running "apt install tensorrt" then everything works like a charm (uncomment these links after installation completes).
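
A minimal sketch of that comment/uncomment dance, assuming cuda.list and nvidia-ml.list are the offending files on your system (check which *.list files actually exist under /etc/apt and /etc/apt/sources.list.d, and leave your nv-tensorrt-*.list file alone):

cd /etc/apt/sources.list.d
# comment out every line that points to developer.download.nvidia.com/compute
sudo sed -i 's|^deb https://developer.download.nvidia.com/compute|# &|' cuda.list nvidia-ml.list
sudo apt-get update
sudo apt-get install tensorrt
# re-enable the nvidia compute sources once TensorRT is installed
sudo sed -i 's|^# \(deb https://developer.download.nvidia.com/compute\)|\1|' cuda.list nvidia-ml.list
sudo apt-get update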

Bonus Fix 1: Python onnx library error

You might encounter the onnx data type error shown below even after TensorRT is successfully installed in Python:

TypeError: 1.0 has type numpy.float32, but expected one of: int, long, float

The error is very probably caused by an old protobuf version. You need to update it with pip3 via this command: pip3 install protobuf -U. For more details, see: https://github.com/onnx/onnx/issues/2534
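
If you want to check which protobuf version you currently have before upgrading (optional, just a sanity check):

pip3 show protobuf | grep Version
pip3 install -U protobuf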


Bonus Fix 2: Cuda 11.0 missing libcusolver.so.10

You might encounter this error in a TensorFlow 2.4+ application:

Could not load dynamic library 'libcusolver.so.10'

It says libcusolver.so.10 is missing. Well, the library simply doesn't exist in Cuda 11; the solution is to create a symbolic link named libcusolver.so.10 that points to libcusolver.so.11 in your Cuda 11 installation. It should work, as explained over at: https://github.com/tensorflow/tensorflow/issues/45263. As for where to find libcusolver.so.11, it depends on the Cuda 11 version installed on your machine. In my installation, libcusolver.so.11 is located at /usr/local/cuda-11.4/lib64, a directory that is also referred to indirectly by the /usr/local/cuda and /usr/local/cuda-11 symbolic links. It's a bit hairy, but all you need to do is create the libcusolver.so.10 symbolic link in /usr/local/cuda-11.4/lib64 pointing to libcusolver.so.11.
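
A minimal sketch of the symlink fix, assuming the Cuda lib64 directory is /usr/local/cuda-11.4/lib64 as it is on my machine (adjust the path to match your installation):

cd /usr/local/cuda-11.4/lib64
sudo ln -s libcusolver.so.11 libcusolver.so.10
ls -l libcusolver.so.10   # should now point to libcusolver.so.11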

Sunday, September 5, 2021

Fixing Failure to Load Video in OpenCV-Python on Linux

The failure to load video in OpenCV-Python usually happens with older Linux distributions. In my case, it happened on Ubuntu 18.04. The failure can be detected with Python code similar to this (shown here inside a small helper function so the early return is valid):

import cv2

def check_video(video_file_path):
    # ..
    cap = cv2.VideoCapture(video_file_path)
    if cap.isOpened():
        print("Video file loaded successfully")
    else:
        print("Failed to load video!")
        return
    # ..

The failure to load the video is very likely caused by a broken OpenCV-Python installation. In my case it was caused by a very old pip version. An old pip can trigger unexpected problems; here, the old pip (version 9.x) forced a from-source recompilation of OpenCV when installing opencv-python and then installed the result of that recompilation at the end of its run. Due to this erratic pip behaviour, the resulting opencv-python did not work properly. As a side note: I was using a Python 3 venv virtual environment when I encountered this problem.

THE FIX:

The fix is quite simple:

  1. Uninstall your current (broken) OpenCV-Python. In my case, simply invoking "pip uninstall opencv-python" works. I carried out this command inside the virtual environment that has the erratic OpenCV-Python.
  2. Upgrade pip to a more up-to-date version with the "pip install --upgrade pip" command. Don't be afraid to carry out this step if you're using a Python virtual environment, because it won't affect the rest of your machine in any way.
  3. Re-install OpenCV-Python with "pip install opencv-python", or specify a particular version if you want, e.g. "pip install opencv-python==4.3.0.38" to install version 4.3.0.38. The consolidated commands appear right after this list.
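
Putting the three steps together, run inside the affected virtual environment (the ==4.3.0.38 pin is only an example; drop it to get the latest version):

pip uninstall opencv-python
pip install --upgrade pip
pip install opencv-python==4.3.0.38
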
This issue is explained in more detail here: https://pypi.org/project/opencv-python/ (point 2 in the Installation section). It took me an hour to figure out the problem because I had never encountered the issue before. Hopefully this helps you out if you have a similar problem.

Fixing "RTL8822CE wireless adapter doesn't work" on Ubuntu 18.04

The RTL8822CE WiFi adapter doesn't work out of the box in Ubuntu 18.04 due to missing firmware, not because the driver is missing from Ubuntu 18.04. The fix requires downloading the firmware from kernel.org and copying it to your Ubuntu 18.04 firmware directory. To do that, first make sure the machine has an internet connection before you do the steps below, either via the ethernet connector or some other means; in my case I used a spare USB WiFi adapter that works out of the box in Ubuntu 18.04. Then, install the WiFi firmware like this:

sudo apt install git
git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
sudo cp linux-firmware/rtw88/rtw8822c_wow_fw.bin /lib/firmware/rtw88 

Another approach would be to copy the file manually via a USB thumb drive if you can't connect the machine to the internet by other means.
This fix is elaborated over at: https://askubuntu.com/questions/1309905/rtl8822ce-wireless-adapter-doesnt-work-on-ubuntu-18-04 
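
If you want to confirm that the rtw88 driver actually picks up the firmware after a reboot, a check like the one below should do (this is only a sanity check, not part of the fix, and the exact log wording may differ by kernel version):

dmesg | grep -i rtw    # look for firmware load messages from the rtw88 driver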
 

Fixing "ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory" in Ubuntu 18.04

If you're using Nvidia CUDA 10.1.243 in your software development, either directly or indirectly, and encountered "ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory", it's caused by the unexpected location of said library. Nvidia has moved the library from the cuda-10.1 directory to the cuda-10.2 directory. I know, this is rather insane, but it is what it is in Ubuntu 18.04 if you installed Cuda from Nvidia's *.deb packages.

The FIX: 

Add the path of the cuda-10.2 library directory to your LD_LIBRARY_PATH, as explained on Stack Overflow: https://stackoverflow.com/questions/55224016/importerror-libcublas-so-10-0-cannot-open-shared-object-file-no-such-file-or/64472380#64472380, i.e. add this line to your shell init file (*.profile or similar):

export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

The line above assumes you've installed Cuda in its default installation location, i.e. /usr/local. You need to modify the path if you installed Cuda in a non-default location.
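
To double-check that the library is where the new LD_LIBRARY_PATH entry expects it (just a sanity check; adjust the path if your Cuda lives elsewhere):

source ~/.profile    # or log out and back in so the new LD_LIBRARY_PATH takes effect
ls /usr/local/cuda-10.2/lib64/ | grep libcublas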