Wednesday, December 30, 2015

Building 64-bit AIX Executables with Autotools


Creating 64-bit AIX executables is not quite as straightforward as on other platforms if you are using Autotools' Libtool. Actually, even if you don't use Libtool, it is still not as straightforward as on the other platforms supported by GNU Autotools. I'm not going to present example code that runs on AIX as a 64-bit executable in this post. However, I'll show you how to build such a program with the correct build script, with the assumption that you are using GNU Autotools.
Let's proceed to the real stuff. Below is an abridged version of the build script I use to build the debug version of my Autotools project on AIX:
#!/bin/sh

## NOTE:
## -----
## - Passing native linker flag via LDFLAGS worked as documented in libtool (as shown below). 
##
## - For more info on native AIX ar archiver, see: 
##   https://www-01.ibm.com/support/knowledgecenter/ssw_aix_53/com.ibm.aix.cmds/doc/aixcmds1/ar.htm?lang=en 
##
## - For more info on native AIX linker, see: 
##   https://www-01.ibm.com/support/knowledgecenter/ssw_aix_53/com.ibm.aix.cmds/doc/aixcmds3/ld.htm?lang=en 
##

case "$1" in 

      AIX) ./configure CFLAGS="-DDEBUG -fstrict-aliasing -Wstrict-aliasing=2 -g -O0 -maix64" \
    LDFLAGS="-Wl,-b64" AR='ar -X32_64' && make V=1
    ;;
      ###... 
esac

Mind you that I'm using Libtool in the Autotools project that uses the build script above. As you can see, you need to pass the correct flags to the compiler (I'm using GCC on AIX), to the linker, and to the tools invoked by Libtool. The -Wl, prefix passes flags directly to the underlying linker; in this particular case, the linker is the native AIX linker (see: AIX ld command). The next thing to pay attention to is that the ar archiver is the native AIX archiver, which only supports 32-bit object files by default. Therefore, you must explicitly override the AR setting to force it to handle 64-bit object code, as shown above. See the AIX ar command documentation for more details on the archiver.

The release version of the build script is not that different from the debug version. I think you could figure it out already.
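Still, for completeness, here is a rough sketch of how the release case might look. This is my own guess at sensible defaults, not the exact script I use: keep -maix64, -Wl,-b64, and the AR override, drop -DDEBUG/-g/-O0, and pick your own optimization level.
#!/bin/sh

case "$1" in

      AIX) ./configure CFLAGS="-fstrict-aliasing -O2 -maix64" \
    LDFLAGS="-Wl,-b64" AR='ar -X32_64' && make V=1
    ;;
      ###...
esac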

Hopefully this helps those working with Autotools projects in AIX.

Friday, December 25, 2015

The Importance of Reading Autoconf and Automake Manuals

The title of this post is possibly an understatement. Time and again, silly questions and "stupid"/inappropriate ways of using software tools could have been prevented by reading the manual. This especially applies to the GNU tools. If you've installed GNU tools such as Autotools, you should read their accompanying manuals, preferably via:
$ info autoconf
$ info automake
Take your time to learn the keyboard navigation; pressing the "?" key brings up the navigation help.

You'll appreciate the manuals better after seeing questions like this one on Stack Overflow. Let me copy the relevant question here:
I've been looking for this for a while: I'm currently converting a medium-size program to autotools, coming from an eclipse-based method (with makefiles)
I'm always used to having a "debug" build, with all debug symbols and no optimizations, and a "release" build, without debug symbols and best optimizations.
Now I'm trying to replicate this in some way with autotools, so I can (perhaps) do something like:
./configure
make debug
Which would have all debug symbols and no optimizations, and where:
./configure
make
Would result in the "release" version (default)
PS: I've read about the --enable-debug flag/feature, ...
Well, this question is kind of "stupid" if you have read the Automake manual. The manual explains the answer to exactly this question, through an example, in section 2.2.6, Parallel Build Trees (a.k.a. VPATH Builds). The title of section 2 of the manual is An Introduction to the Autotools, so I think you can see what I'm getting at without going further. I have to give kudos to William Purcell for his answers in that Stack Overflow thread.
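The gist of the manual's suggestion is to configure the same source tree from two (or more) separate build directories, each with its own flags; roughly along these lines (the directory names are just illustrative):
# debug build tree
mkdir debug && cd debug
../configure CFLAGS='-g -O0'
make

# optimized "release" build tree
cd .. && mkdir release && cd release
../configure
make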

Now, the unfortunate fact is that many developers fail to see the value of reading the manual. The proof is everywhere: if you find a package that uses Autotools and provides a "./configure --enable-debug" flag which changes compiler flags such as the optimization level, it's a sign of trouble, because the "maintainer" of the package doesn't follow the GNU Autotools philosophy and makes other people's lives harder. FYI, in GNU software terminology, the maintainer is the one who creates the software package, while the user is the one who compiles and installs the package.

Well, that's it for this RTFM reminder.

Friday, November 27, 2015

How to Read Complicated C Language Declarations

There is a simple way to decode complicated C language declarations called the "Clockwise/Spiral Rule". The technique basically starts at the unknown variable/function name and moves "right" in a spiral (outward) to decode the exact type. The details are explained at http://c-faq.com/decl/spiral.anderson.html. It's the simplest rule I've ever encountered for reading complicated C declarations.
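To give a feel for the rule, here are a few declarations of my own (not taken from the linked page) with their decoded meanings as comments:
int *p[5];              /* p is an array of 5 pointers to int */
int (*q)[5];            /* q is a pointer to an array of 5 ints */
char *(*fp)(void);      /* fp is a pointer to a function returning a pointer to char */
int (*(*pf)(int))[10];  /* pf is a pointer to a function taking an int and
                           returning a pointer to an array of 10 ints */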

Saturday, November 21, 2015

Autotools Conditional Makefile Creation via AM_COND_IF

There are times when you need to generate several Makefiles on one platform but want to avoid generating the same Makefiles on another platform, or want to generate different Makefiles for the latter. This is where AM_COND_IF (http://www.gnu.org/s/automake/manual/html_node/Usage-of-Conditionals.html) comes to the rescue.

As cool as AM_COND_IF sounds, it takes a bit of exercise to make it work as you intend due to the lack of documentation, at least for those not savvy with the m4 macro language. Now, let's get down to business. These are the rules:
  • AM_COND_IF cannot be invoked twice with the same Makefile output (assuming you're using AC_CONFIG_FILES with AM_COND_IF).
  • You need to create the Automake conditional (with AM_CONDITIONAL) before using AM_COND_IF.
Now, let's look at a sample configure.ac that uses AM_COND_IF.
# Platform specific checks
libevent_test_on_linux="no"

# For host type checks
AC_CANONICAL_HOST
# OS-specific tests
case "${host_os}" in
    *linux*) 
    # Define we are on Linux
    AC_DEFINE(HAVE_LINUX, 1, [Current OS is Linux]) 
       libevent_test_on_linux="yes"
    ;;
esac

AM_CONDITIONAL(ON_LINUX, test "x$libevent_test_on_linux" = "xyes")   

# Generate Makefile based on current OS
AC_CONFIG_FILES([Makefile
                 lib1/Makefile
                 lib2/Makefile
                 experiment_2/Makefile])

AM_COND_IF([ON_LINUX], 
           [AC_CONFIG_FILES([linux_specific_lib/Makefile])])

As you can see, the first invocation of AC_CONFIG_FILES instructs the build system to generate the Makefiles used on all build platforms. The second invocation of AC_CONFIG_FILES (inside AM_COND_IF) only generates its Makefile if the target operating system is Linux. If, for example, you want to support another operating system via a different set of OS-specific Makefiles, you can copy the Linux implementation, add it to configure.ac, and modify it to suit your needs.

That's it. Hopefully, this helps those playing around with AM_COND_IF. The key takeaway is: never call AC_CONFIG_FILES with the same target Makefile output twice, not even from inside AM_COND_IF. Autotools will complain if you do, and you won't be able to generate the Makefile via autoreconf. You must structure your AC_CONFIG_FILES invocations so that they conform to this rule.
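One way to do that, for instance, is to use AM_COND_IF's optional else branch so that each output file appears in exactly one AC_CONFIG_FILES call. In the sketch below, other_os_lib is just a hypothetical directory name:
AM_COND_IF([ON_LINUX],
           [AC_CONFIG_FILES([linux_specific_lib/Makefile])],
           [AC_CONFIG_FILES([other_os_lib/Makefile])])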

Friday, November 20, 2015

Openssh Hiccups and Fix on IBM System i (AS/400)

Let me start with the symptoms:
  • Logging in to the AS/400 (PASE) via openssh always failed, despite the username, password, and all directory/file permissions having been triple-checked and confirmed to be OK.
  • From the SSH log, it seems that the login is successful but the connection is immediately "kicked out" for some reason. 
This is how I fixed the problem:
  1. Run ssh client with most verbose flag.
  2. Run the shell (default shell) invoked by sshd in the server upon login.
  3. Look for clues from 1 and 2.
  4. Fix the problem based on the clue.
Running ssh -vvv on the ssh client machine produced this log:
debug3: no such identity: /home/pinczakko/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: /home/pinczakko/.ssh/id_ed25519
debug3: no such identity: /home/pinczakko/.ssh/id_ed25519: No such file or directory
debug2: we did not send a packet, disable method
debug3: authmethod_lookup keyboard-interactive
debug3: remaining preferred: password
debug3: authmethod_is_enabled keyboard-interactive
debug1: Next authentication method: keyboard-interactive
debug2: userauth_kbdint
debug2: we sent a keyboard-interactive packet, wait for reply
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug3: userauth_kbdint: disable: no info_req_seen
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: 
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
xxx@10.10.10.10's password: 
debug2: we sent a password packet, wait for reply
debug1: Authentication succeeded (password).
Authenticated to 10.10.10.10 ([10.10.10.10]:22).
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug1: Entering interactive session.
debug2: callback start
debug2: fd 3 setting TCP_NODELAY
debug3: ssh_packet_set_tos: set IP_TOS 0x08
debug2: client_session2_setup: id 0
debug2: channel 0: request shell confirm 1
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
debug1: client_input_channel_req: channel 0 rtype exit-signal reply 0
debug2: channel 0: rcvd eof
debug2: channel 0: output open -> drain
debug2: channel 0: obuf empty
debug2: channel 0: close_write
debug2: channel 0: output drain -> closed
debug2: channel 0: rcvd close
debug2: channel 0: close_read
debug2: channel 0: input open -> closed
debug3: channel 0: will not send data after close
debug2: channel 0: almost dead
debug2: channel 0: gc: notify user
debug2: channel 0: gc: user detached
debug2: channel 0: send close
debug2: channel 0: is dead
debug2: channel 0: garbage collecting
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
  #0 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1)

Transferred: sent 3328, received 2568 bytes, in 0.5 seconds
Bytes per second: sent 6346.9, received 4897.5
debug1: Exit status -1

As you can see, there is nothing particularly revealing in the log. You need to run it in real time to see the connection being dropped immediately (starting at debug1: client_input_channel_req: channel 0 rtype exit-signal reply 0). This makes it clear that the client authenticated successfully but somehow the shell on the server "died", or there is a permission problem on the user's default login directory or some other path. I triple-checked the permissions on all related paths without any clear leads.

The next step was to check which shell executable is the default shell, i.e. which shell is invoked by openssh when you finish logging in to the machine running it. In System i version 6.1 (its AS/400 PASE), the openssh configuration file is located at QOpenSys/QIBM/UserData/SC1/OpenSSH/openssh-3.8.1p1/etc/sshd_config. Unfortunately, no default shell variable is set in there. However, I have another System i machine that runs openssh with all default settings just fine. Cross-checking that machine, I found out the default shell is QOpenSys/usr/bin/bsh. Therefore, I needed to check whether bsh was working fine on the problematic System i machine.

I found out that the bsh executable on the problematic System i machine is somehow broken. When I ran bsh from the PASE shell on a 5250 terminal (via the CALL QP2TERM command), bsh stopped immediately; basically, the shell reported that bsh was "killed". I tried another shell, csh (QOpenSys/usr/bin/csh), and it worked. Therefore, what I needed to fix the problem was a way to force openssh to use csh as the login shell.

Now we arrive at the FIX. Forcing openssh to use a certain shell when logging in to System i (at least in version 6.1) can be done via the ibmpaseforishell "magic" keyword (see: http://www-01.ibm.com/support/docview.wss?uid=nas8N1011555). These are the steps:
  1. Login to the System i machine via 5250 terminal application. 
  2. Open QOpenSys/QIBM/UserData/SC1/OpenSSH/openssh-3.8.1p1/etc/sshd_config via EDTF. This is the command: EDTF 'QOpenSys/QIBM/UserData/SC1/OpenSSH/openssh-3.8.1p1/etc/sshd_config'
  3. Add the ibmpaseforishell "magic" keyword near the end of the configuration file. This is the result for me:
    #no default banner path 
    #Banner /some/path 
    
    #ibm pase for IBM i shell 
    ibmpaseforishell /QOpenSys/usr/bin/csh 
    
    #override default of no subsystems 
    Subsystem sftp /QOpenSys/QIBM/ProdData/SC1/OpenSSH/openssh-3.8.1p1/libexec/sftp-server 
    
    
    As you can see, I changed the default shell to csh via the ibmpaseforishell keyword.
  4. Check that ssh client can now connect to the ssh server (daemon) in AS400. When I carried out this step, I finally get a working shell, i.e. the csh shell. 
The key point here is that you must run ssh in full verbose mode (via the -vvv switch) to help debug the connection problem. Once I was sure that openssh itself was fine, I moved up the chain by checking the default shell. It turned out the default shell was the culprit.

NOTE:
----------
- It seems that not all System i machines support the ibmpaseforishell keyword; the machine apparently has to have this PTF. However, the keyword works on the problematic System i machine that I worked with.

Thursday, November 5, 2015

Supporting Out-Of-Source-Code-Tree Build with Autotools

Some open-source code is not trivial to build outside of its source (code) tree. This is especially true for some open-source libraries because they generate intermediate file(s) which must be handled accordingly. But fear not: there are two Autoconf constructs (or rather, output variables) that can help you tame this wild library code. They are $(top_srcdir) and $(top_builddir). Refer to Autoconf Preset Output Variables for their details.

I'll take libevent as a real-world example, because this library generates an intermediate header file (event-config.h) at build time which must be included in the build process. This is where $(top_srcdir) and $(top_builddir) come into play. If you want to be able to build out of the source tree, you need to include this generated header in your application code that uses libevent; you use $(top_builddir) for that. At the same time, you also need to include the "ordinary" headers in the source tree, and that's where $(top_srcdir) comes into play.

Let's assume your source tree looks like the one below and your application links statically to this particular libevent version:
.
├── libevent-2.0.22-stable
│   ├── autom4te.cache
│   ├── compat
│   │   └── sys
│   ├── include
│   │   └── event2
│   ├── m4
│   ├── sample
│   ├── test
│   └── WIN32-Code
│       └── event2
└── your_application_code_dir

In your_application_code_dir, you need to have a Makefile.am file with the following contents:
### NOTE:
### $(top_builddir) is required for libevent because there is 
### an include file (event-config.h) that is generated at build-time.
### This file will be in the build directory instead of the source code 
### directory if you build out-of-tree.
###
AM_CPPFLAGS = -I$(top_srcdir)/libevent-2.0.22-stable/include \
       -I$(top_builddir)/libevent-2.0.22-stable/include
  

## Omitted for clarity .. 

bin_PROGRAMS = your_program_name

your_program_name_SOURCES = your_program_name.c
your_program_name_LDADD = $(top_builddir)/libevent-2.0.22-stable/libevent_core.la

## Omitted for clarity .. 
The Makefile.am above (placed in your_application_code_dir) should be enough to make it possible to build out of the source tree. As you can see, both the include directory in the build tree (outside the source tree) and the include directory in the source tree are on the include path. This should make it less of a hassle to keep your source tree clean all the time, especially if you are using a version control system such as Subversion, Git, or Mercurial.

Hopefully this helps those who intend to always build autotools code out-of-(source)-tree.

Saturday, October 31, 2015

Fixing Tmux Mouse Issue

Check your tmux version if you've been experiencing mouse-related issues in tmux recently. This is the command:
me@machine $  tmux -V
That should show your tmux version. If you're using Tmux version 2.1, your old mouse configuration in .tmux.conf is no longer valid. The following shows the valid .tmux.conf configuration lines for mouse support in Tmux version 2.1:
#Mouse works as expected
set -g mouse on
#setw -g mode-mouse on #tmux version < 2.1
#set -g mouse-select-pane on  #tmux version < 2.1
#set -g mouse-resize-pane on  #tmux version < 2.1
#set -g mouse-select-window on  #tmux version < 2.1
The commented-out lines are for tmux versions < 2.1. There is only one mouse setting in tmux 2.1: "mouse". My tmux worked as before after this change.

Note:
Primary source of information: [SOLVED] Tmux 2.1 new mouse config issues - can't scroll (It helps despite failing to show the use of the new option in tmux.conf).

Friday, October 30, 2015

Modifying Memcached Configuration in Arch Linux

It's not quite trivial to modify the Memcached server configuration in Arch Linux because it's entirely managed via systemd, at least for those not well versed in systemd. This is the command to modify the Memcached server configuration in Arch Linux via systemd's systemctl:
root@darkstar # systemctl edit memcached.service --full
The command spawns the default text editor configured for systemctl. You can make your changes in the editor and save them; systemd will apply your changes accordingly.
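For illustration, the kind of change you would typically make is to adjust the memcached command-line options in the unit's ExecStart line. The excerpt below is purely hypothetical (the unit Arch actually ships may look different), so edit what systemctl shows you rather than copying this verbatim:
[Service]
# hypothetical example: listen only on localhost, cap the cache at 128 MB
ExecStart=/usr/bin/memcached -l 127.0.0.1 -p 11211 -m 128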

Wednesday, October 14, 2015

Building C++ Application with Boost Library and Autotools in Linux

In this post, I'm going to present the steps required to build a C++ application which uses the Boost library on Linux (x86_64) with the help of GNU Autotools. Mind you that I'm using the Arch Linux distribution, so please adjust the prerequisites according to your environment.

Prerequisites

  • The usual GCC tools, i.e. g++ compiler, ld linker, etc.
  • GNU autotools: autoconf, automake, etc.
  • autoconf-archive. This is a set of Autoconf macros that helps with building Boost applications. On Arch Linux, I used pacman to install it. You need to carry out a similar step on your Unix/Linux installation.

Initializing The Build System

I assume that the source code directory entries look like this:
.
├── configure.ac
├── m4
├── Makefile.am
└── src
    ├── main.cpp
    └── Makefile.am

m4 is an empty directory; configure.ac, Makefile.am, and src/Makefile.am are the boilerplate files required to initialize the Autotools build system. The next section shows you the contents of these files. Use autoscan and autoreconf to initialize the Autotools build system like so:
  1. Run autoscan (in the root source code directory) to create configure.scan template file. You need to copy/rename this file to configure.ac and edit it accordingly.
  2. Run autoreconf to create files required by GNU build tools (GNU make and gcc/g++). The following is how you would run autoreconf to create the files.
    $ autoreconf -fvi
    

Dealing with Boost Modules

Every Boost module requires a separate dependency library, an Autoconf macro (in configure.ac), and an Automake "library entry" (in Makefile.am). Now let's start with configure.ac. The contents of <ROOT_DIR>/configure.ac are as follows:
# configure.ac
#                                               -*- Autoconf -*-
# Process this file with autoconf to produce a configure script.

AC_PREREQ([2.69])
AC_INIT([boost_test], [0.0.1], [me@bug.com])
AC_CONFIG_SRCDIR([config.h.in])
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_MACRO_DIR([m4])

AM_INIT_AUTOMAKE([foreign -Wall -Werror])
m4_ifdef([AM_SILENT_RULES],
    [AM_SILENT_RULES([yes])
])

# Checks for programs.
AC_PROG_CC
AC_PROG_CXX

# Checks for libraries.
AX_BOOST_BASE([1.48],, [AC_MSG_ERROR([This program needs Boost, but it was not found in your system])])
AX_BOOST_SYSTEM
AX_BOOST_DATE_TIME
AX_BOOST_THREAD

# Checks for header files.

# Checks for typedefs, structures, and compiler characteristics.

# Checks for library functions.

AC_CONFIG_FILES([Makefile
                 src/Makefile])
AC_OUTPUT

TODO: Explain the lines that have something to do with Boost (AX_BOOST_*) in configure.ac 

The contents of <ROOT_DIR>/Makefile.am are as follows:
ACLOCAL_AMFLAGS = -I m4
EXTRA_DIST = bootstrap
SUBDIRS = src
<ROOT_DIR>/src/Makefile.am is as follows:
bin_PROGRAMS = boost_test

AM_CPPFLAGS = $(BOOST_CPPFLAGS)
AM_LDFLAGS = $(BOOST_LDFLAGS)

boost_test_SOURCES = main.cpp
boost_test_LDADD = $(BOOST_SYSTEM_LIB) $(BOOST_THREAD_LIB) $(BOOST_DATE_TIME_LIB) 

TODO: Explain the lines that have something to do with Boost in src/Makefile.am -- $(BOOST_CPPFLAGS) $(BOOST_LDFLAGS) $(BOOST_*_LIB)

The contents of <ROOT_DIR>/src/main.cpp are as follows:
#include <iostream>
#include <boost/thread.hpp>
#include <boost/date_time.hpp>

void workerFunc()
{
 boost::posix_time::seconds workTime(3);

 std::cout << "Worker: running" << std::endl;

 // Pretend to do something useful...
 boost::this_thread::sleep(workTime);

 std::cout << "Worker: finished" << std::endl;
}

int main(int argc, char *argv[])
{
 std::cout << "main: startup" << std::endl;

 boost::thread workerThread(workerFunc);

 std::cout << "main: waiting for thread" << std::endl;

 workerThread.join();

 std::cout << "main: done" << std::endl;

 return 0;
}

Note that each Boost module here is represented by a single header file and also requires an Autoconf entry in configure.ac and a corresponding library dependency in src/Makefile.am.

END NOTE: This post is still incomplete. It's published solely for the benefit of those who can understand it quite well from the source code itself.

Monday, October 5, 2015

Developing C/C++ Software for IBM AIX (or AIX wannabe)

The following are important pieces of documentation for C/C++ developers working on the IBM AIX platform. Despite IBM's insistence on promoting Linux in its product line, the AIX and System i product lines still command the mindshare of enterprise decision makers at the moment. So, for those (un)fortunate enough to have to deal with AIX in a C/C++ environment, I found these "redbooks" to be indispensable:
  1. AIX PDFs, basically everything there is to know about IBM AIX. My favorite section is the Technical Reference section though.
  2. Developing and Porting C and C++ Applications on AIX. This one is what it says. DBX is surely explained in there too (for those AIX geeks out there).
Hopefully, this helps those playing around with IBM AIX at the moment. AIX is a little different from Linux or FreeBSD, but it is UNIX after all. Therefore, if your development is gunning for POSIX compliance, you should be fine most of the time.

Wednesday, September 30, 2015

Compiling and Using Custom Arch Linux Kernel

In my previous post, I needed to downgrade my Arch Linux kernel version in order to work around an ACPI bug in Linux kernel 4.X. After a bit of searching through the AUR, I stumbled upon a kernel 3.X that is still maintained, at least until 2017: Linux kernel 3.18. Because it resides in the AUR, you must compile this kernel version yourself and install it with pacman. In this post, I will explain the process of compiling and installing the custom kernel 3.18 from the AUR. These are the steps I carried out:
  1. Clone the kernel git repository with: git clone https://aur.archlinux.org/linux-lts318.git to directory where you're going to build the kernel.
  2. Add Linus Torvalds and Greg KH keys to list of trusted keys in pacman "database" of trusted keys:
    $ sudo pacman-key -r  00411886
    $ sudo pacman-key -r  6092693E
    
  3. Import Linus Torvalds and Greg KH public key to your machine if you haven't done that already.
    $ gpg --recv-keys 79BE3E4300411886
    $ gpg --recv-keys 38DBBDC86092693E
    
    Be patient, as it could take a while searching and importing the keys.
  4. Now, you can proceed to build the kernel in the git-cloned directory in step 1. cd into the directory and makepkg:
    $ makepkg -s
    
    The -s flag is to make sure that all dependencies are downloaded while building the custom kernel package.
  5. Assuming the kernel package build is finished, you can proceed to install the kernel headers and the kernel itself:
    # pacman -U linux-lts318-headers-3.18.20-1-x86_64.pkg.tar.xz
    # pacman -U linux-lts318-3.18.20-1-x86_64.pkg.tar.xz
    
    You are advised to install the kernel headers before installing the kernel, as mentioned at https://wiki.archlinux.org/index.php/Kernels/Compilation/Arch_Build_System#Installing
  6. Once the kernel is installed, the remaining step is to update your machine bootloader. If you are using systemd bootloader (a.k.a gummiboot), all you need to do is modify /boot/loader/entries directory to include entry for the new kernel. For example: Create a new arch-lts.conf file in /boot/loader/entries with the following contents:
    title  Arch Linux 3.18 LTS AUR
    linux  /vmlinuz-linux-lts318
    initrd  /initramfs-linux-lts318.img
    options  root=PARTUUID=[your_root_partition_UUID] rw
    
That's all you need to do to compile and install a custom kernel from the AUR. If you want to create your own custom kernel, you need to build your own Arch Linux package; the compilation and installation steps should be similar to what's explained above.

NOTE:
  • If you have a dirmngr problem showing up in your shell, it's probably because you haven't run it yet. You should run it as root:
    # dirmngr < /dev/null
    See: Pacman-key#gpg:_keyserver_receive_failed:_No_dirmngr
  • See also GnuPG section in Arch Wiki for more details.
  • If you'd rather prevent future kernel upgrades when upgrading packages via pacman -Syu, you can add this line to /etc/pacman.conf:
    IgnorePkg   = linux
    
    The line will prevent pacman from upgrading the linux kernel (linux package).



Friday, September 11, 2015

Makefile.am "option subdir-objects is disabled" Warning and Fix

The "option subdir-objects is disabled" warning is thrown by GNU Autotools when a Makefile.am builds a source file located in a subdirectory beneath the Makefile.am itself. This is an example warning:
.. Makefile.am:4: warning: source file '$(srcdir)/test.c' is in a subdirectory,
.. Makefile.am:4: but option 'subdir-objects' is disabled
This warning is not fatal. Roughly, it means the generated object files will not be placed in the same directory as the source code. The full explanation is at https://www.gnu.org/software/automake/manual/html_node/List-of-Automake-options.html#List-of-Automake-options (scroll down to the subdir-objects option). This is the important excerpt:
subdir-objects

If this option is specified, then objects are placed into the subdirectory of the build directory corresponding to the subdirectory of the source file. For instance, if the source file is subdir/file.cxx, then the output file would be subdir/file.o.
Fixing this problem is not hard: you just need to add the subdir-objects option to Makefile.am. Below is an example taken from the Makefile.am of one of my projects. I placed the statement as the first entry in Makefile.am.
AUTOMAKE_OPTIONS = subdir-objects
### ...
### Other statements
### ...
Hopefully, this helps out those experiencing this problem.

Saturday, September 5, 2015

Multithreaded Libevent/Libevent2 Server (Code Commentary)

I've been playing with libevent v2.x for a while. In the course of these "experiments", I came across Ron Cemer's multithreaded libevent implementation (see http://roncemer.com/software-development/multi-threaded-libevent-server-example/ and http://sourceforge.net/projects/libevent-thread/) and also an updated version from Qi Huang (see http://randomindigits.blogspot.co.id/2012/11/libevent-2x-multithreaded-socket-server_4.html and https://github.com/gambellhq/samples/tree/master/libevent2-thread-code).

This multithreaded libevent code is mostly fine, except in one department: signal handling. Both versions call pthread functions from inside the signal handler (albeit not directly). That practice is unsafe. The code should handle signals synchronously instead of asynchronously; that is, it should use sigwait() in a dedicated thread to wait for the signal(s) and call pthread functions from there, instead of calling pthread functions inside the signal handler, which is unsafe.

If you've been a "systems" programmer, you should already know that a signal handler in *NIX runs in a different execution context from "normal" thread execution. Therefore, from that context you shouldn't call anything that isn't explicitly documented as safe to call there (i.e. async-signal-safe).

Now, let's see exactly what I mean (see echoserver_threaded.c and workqueue.c in Qi Huang's code):
int runServer(void) {
    ...
    /* Set signal handlers */
    sigset_t sigset;
    sigemptyset(&sigset);
    struct sigaction siginfo = {
        .sa_handler = sighandler,
        .sa_mask = sigset,
        .sa_flags = SA_RESTART,
    };
    sigaction(SIGINT, &siginfo, NULL);
    sigaction(SIGTERM, &siginfo, NULL);
    ...
}
...
static void sighandler(int signal) {
    fprintf(stdout, "Received signal %d: %s.  Shutting down.\n", signal,
            strsignal(signal));
    killServer();
}
....
void killServer(void) {
    fprintf(stdout, "Stopping socket listener event loop.\n");
    if (event_base_loopexit(evbase_accept, NULL)) {
        perror("Error shutting down server");
    }
    fprintf(stdout, "Stopping workers.\n");
    workqueue_shutdown(&workqueue);
}
...
void workqueue_shutdown(workqueue_t *workqueue) {
    worker_t *worker = NULL;
    ...
    /* Remove all workers and jobs from the work queue.
     * wake up all workers so that they will terminate. */
    pthread_mutex_lock(&workqueue->jobs_mutex);
    ...
    pthread_cond_broadcast(&workqueue->jobs_cond);
    pthread_mutex_unlock(&workqueue->jobs_mutex);
}
As you can see in the excerpt above, the sighandler() function ends up calling pthread functions indirectly (via killServer() and workqueue_shutdown()). This practice is discouraged because, per the POSIX standard, the behavior of pthread calls made from a signal handler is undefined. Therefore, it's unsafe to do so.
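Below is a minimal sketch (my own code, not taken from either project) of the synchronous pattern: block the signals in every thread, then let one dedicated thread collect them with sigwait() and call the shutdown path from an ordinary thread context, where pthread and libevent calls are safe. killServer() is assumed to be the same function as in the excerpt above.
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

extern void killServer(void);        /* same function as in the excerpt */

static sigset_t shutdown_signals;

static void *signal_waiter(void *arg)
{
    int sig;
    (void) arg;

    /* Wait synchronously; this runs in a normal thread context, so calling
     * pthread/libevent functions from here is safe. */
    if (sigwait(&shutdown_signals, &sig) == 0) {
        fprintf(stdout, "Received signal %d. Shutting down.\n", sig);
        killServer();
    }
    return NULL;
}

int start_signal_waiter(void)
{
    pthread_t tid;

    sigemptyset(&shutdown_signals);
    sigaddset(&shutdown_signals, SIGINT);
    sigaddset(&shutdown_signals, SIGTERM);

    /* Block these signals in the calling thread; threads created afterwards
     * inherit the mask, so only the sigwait() thread ever sees them. */
    pthread_sigmask(SIG_BLOCK, &shutdown_signals, NULL);

    return pthread_create(&tid, NULL, signal_waiter, NULL);
}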

Anyway, feel free to challenge this view :-)

Sunday, August 30, 2015

Arch Linux Brightness Button Problem in Kernel 4.X

Ever since I updated my Arch Linux installation to a version using the kernel 4.X series, the brightness buttons on my Lenovo laptop (Ideapad Flex 14) have not been working properly. At first, I thought it was merely a configuration problem, but it turns out there's more to it than that. Let's break it down:
  • First and foremost, the laptop uses an Nvidia Optimus configuration, i.e. Intel integrated graphics + an Nvidia discrete GPU. However, this didn't preclude the (ACPI) brightness buttons from working just fine under Linux kernel 3.X. 
  • Upon boot, only intel_backlight is loaded. In kernel 3.X, both the intel_backlight and acpi_backlight "modules" are loaded. However, after trying this workaround, it's still not working as expected. Note: I used this to modify the kernel boot parameter. 
  • The symptoms of the brightness button malfunction are as follows: the button is not exactly dead, it's just that it takes a few seconds for a button press to be registered by the kernel. The brightness-level setting in /sys/class/backlight/intel_backlight works just fine. 
This bug is not a showstopper, but it's irritating. For the time being, I'm staying away from bleeding-edge Arch Linux with kernel 4.X and using the LTS kernel instead (https://www.archlinux.org/packages/core/x86_64/linux-lts/). However, Greg KH mentioned that support for this LTS kernel will cease next year. We'll see what options I have by then; maybe I'll just remap other laptop keys or find some other workaround.

------UPDATE-----

The Arch Linux LTS kernel moved to kernel 4.1.X over the weekend. I tried this recommended kernel version, but it turns out the ACPI-related problem I mentioned above also exists in it. I had been thinking of using one of the kernels from the AUR, but ultimately decided to just downgrade the kernel. It turned out to be much easier than I thought. This is what I did (in chronological order):
  1. I had already upgraded my Arch Linux to the 4.1.X LTS kernel, so I booted into this LTS kernel version. 
  2. Log in as root and then carry out the downgrade procedure as explained at https://wiki.archlinux.org/index.php/Downgrading_packages#Downgrading_the_kernel. There were only two packages that I needed to downgrade, i.e. the Linux kernel and the Linux kernel headers. Therefore, I used this command:
    pacman -U linux-lts-3.14.52-1-x86_64.pkg.tar.xz linux-lts-headers-3.14.52-1-x86_64.pkg.tar.xz
    
    and everything went without a hitch. The machine works just like it did before the upgrade. The ACPI hotkeys which were not functioning properly in kernel 4.X are now working like they used to. I was a bit worried about systemd compatibility at this point, but it seems that everything works as it should.
------UPDATE 2-----
I finally decided to use one of the custom kernels from the AUR because it's a stable release that will be maintained at least up to 2017. See: Compiling and Using Custom Arch Linux Kernel

Friday, August 14, 2015

Very Simple Libmemcached Sample Code

The libmemcached documentation can be a bit overwhelming for those new to the memcached client library. The code below shows a very simple libmemcached usage sample. It assumes that you have libmemcached >= v1.0 installed on your machine. The code comes with no warranty whatsoever; use it at your own risk. Here comes the code:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h> /* for strlen() */
#include <libmemcached-1.0/memcached.h>

#define DEFAULT_PORT 7500

int main (int argc, char *argv[])
{
  memcached_return_t rc;
  char * value; 
  char buffer[1024];
  int length= snprintf(buffer, sizeof(buffer), "--server=localhost:%d", DEFAULT_PORT);
  char key[] = "my_key";
  char obj_value[] = "my value";
  size_t obj_val_len;

  memcached_st *memc= memcached(buffer, length);
  if (memc == NULL) {
     printf("Error: Failed to allocate memcached_st object\n");
  }

  rc = memcached_set(memc, key, strlen(key), obj_value, strlen(obj_value) + 1, 0, 0);
  if (rc != MEMCACHED_SUCCESS) {
     printf("Error: Failed to set memcached object value\n");
  }

  value = memcached_get(memc, key, strlen(key), &obj_val_len, 0, &rc);
  if (value == NULL) {
     printf("Error: Failed to read object value\n");
  }
  
  if (MEMCACHED_SUCCESS == rc) {
     printf("Object contents = %s\n", value);
  } else {
     printf("Error: Failed to read object value correctly\n");
  } 

  if (value != NULL) {
     free(value);
  }

  memcached_free(memc);

  return 0;
}
The sample code simply stores a value (the "my value" string) into a memcached server running on localhost at port 7500 and reads it back to make sure the value was stored correctly. It assumes you're running the memcached server on your local machine and making it listen on port 7500. This is how I do that:
me@darkstar $ ./memcached -l 127.0.0.1 -p 7500
If you want to change the port, simply change the DEFAULT_PORT definition.
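To build the sample, something along these lines should work, assuming the code is saved as memcached_sample.c (a file name I just made up) and libmemcached is installed in a standard location:
me@darkstar $ gcc -o memcached_sample memcached_sample.c -lmemcached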

Anyway, the purpose of this post is to be a very gentle introduction to libmemcached. Head over to http://docs.libmemcached.org/index.html for more details.

Saturday, July 25, 2015

Modifying Gummiboot Configuration

Modifying gummiboot configuration (at least in Arch Linux) is quite easy. Upon gummiboot execution (when boot menu is displayed), you can press h to show the gummiboot configuration "help" as shown below.
The screen shot above shows the gummiboot help menu in the lower center of my laptop display. These are the key bindings:

  • d (lower case) sets the currently highlighted menu entry as the default boot OS (or UEFI application) on boot.
  • h (lower case) shows the help menu in the bottom part of the display. Just like shown in the screen shot above.
  • t (lower case) increments the timeout to execute the default boot menu.
  • T (upper case) decrements the timeout to execute the default boot menu.
  • p (lower case) prints "something". I haven't tested what "print" exactly means here.
Therefore, to change the default OS/UEFI application to be automatically executed on timeout, you just need to select the menu entry you want and then press d in gummiboot. The newly set default entry should take effect immediately and be preserved across reboot/shutdown. Anyway, this post is a further elaboration of: Modifying Gummiboot Configuration.

Sunday, July 12, 2015

Cross-Compiling Raspberry Pi Application from Windows with CodeBlocks

Let's start with the problem definition: the Raspberry Pi is too slow for most complex software compilation/build processes, so we need something much more powerful. "Unfortunately" for me, that something more powerful is a Windows 8.1 Professional machine, due to my day job with Micro$oft stuff. But never mind, there's a solution for that platform problem. The machine is quite capable: a Core i5 4200U (2.xx GHz with Turbo Boost) with 8 GB of RAM.

Anyway, there's a quite mature GNU toolchain for this cross-compilation task, kindly provided by Sysprogs: http://gnutoolchains.com/raspberry/. It even comes with a tutorial: http://gnutoolchains.com/raspberry/tutorial/. However, the tutorial doesn't explain how to use the cross toolchain with CodeBlocks because it expects you to use Visual Studio. Well, Visual Studio is just way too resource-hungry for my taste. Therefore, let's find out how to use the cross toolchain with CodeBlocks.

Footnote:
-------------
- This post is incomplete. However, I decided to publish it as it could serve as a starting point for those really looking into doing this kind of thing. I've been off Windows for about a year now, so this no longer has much relevance for me.

Thursday, July 9, 2015

Building Memcached with Statically-Linked Libevent

Building memcached with a statically linked libevent is not all that complicated. However, I found that you have to make slight adjustments to the memcached build system and headers. In some scenarios, statically linking libevent into memcached is the preferred solution, in order to remove the headache associated with maintaining different memcached-libevent version combinations.

Well, actually, I have a working combination of memcached 1.4.24 and libevent 2.0.22-stable. But, it's very dirty at the moment. I'll eventually release it in Github.

Anyway, this is what you need to do to link memcached against a statically built libevent (a rough configure.ac/Makefile.am sketch follows the list):
  1. Place libevent source code inside memcached root directory (the next steps assumes you placed libevent source code inside memcached root directory).
  2. Make the libevent code statically built by using LT_INIT([disable-shared]) in its configure.ac
  3. Make memcached only link to libevent_core.la in memcached Makefile.am. Nothing more than that, because that's all it needs.
  4. Add AC_CONFIG_SUBDIRS([your_libevent_dir]) to memcached configure.ac. 
  5. Disable/remove support for dynamically-linked libevent from memcached configure.ac. 
  6. In my particular memcached version, I needed to modify memcached.h to add two additional libevent include files to make it compile, i.e. event_struct.h and event_compat.h. 
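As a rough sketch of what points 3 and 4 (plus wiring the subdirectory into the build) amount to, assuming the same layout as described above; the exact target names may differ in your memcached version:
# memcached configure.ac (excerpt)
AC_CONFIG_SUBDIRS([libevent-2.0.22-stable])

# memcached Makefile.am (excerpt)
SUBDIRS = libevent-2.0.22-stable .
memcached_CPPFLAGS = -I$(top_srcdir)/libevent-2.0.22-stable/include \
                     -I$(top_builddir)/libevent-2.0.22-stable/include
memcached_LDADD = $(top_builddir)/libevent-2.0.22-stable/libevent_core.la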
Hopefully, these hints are enough for those wanting to statically link libevent into memcached. I'll update this post once the code is up on GitHub.

Thursday, April 16, 2015

Illegal Instruction in IBM iSeries (AS400) PASE

Perhaps most of those who have developed application(s) on iSeries PASE have encountered this very irritating error: "illegal instruction .. bla bla bla ..". Usually, the error shows up at runtime, not when compiling code on PASE. I was very confused back then too. However, upon closer inspection, I found that the culprit mostly comes down to one of these facts:
  1. An operating system API defined in AIX is also present in the iSeries PASE header files, but the API is not implemented by the PASE runtime. This is how you might encounter this bug:
    • The compiler doesn't complain when you compile your program code, due to the presence of the unaltered AIX header in PASE, even though the PASE runtime doesn't implement the said API.
    • At runtime, the runtime loader (and "linker") complains that the said API doesn't exist in the OS. This is the source of the said "illegal instruction .. bla bla bla .." error.
  2. A NULL pointer value. If your program code hits a NULL pointer at runtime, the OS400 PASE runtime will crash your program with the same "illegal instruction .. bla bla bla .." error. 
I have to give "credit" to IBM: the error made me nervous back then, as I thought the combination of the compiler I used and the IBM-supplied linker created a wrong binary, which caused the illegal instruction to be emitted. Only upon closer inspection did I find the culprits explained above.

Maybe you want to ask me: then how the hell am I going to be sure whether the API I need is supported or not?
Well, you can consult the IBM Redbooks. But AFAIK, you have to test your binary to be 100% sure, because there's no detailed explanation of which APIs are supported and which are not for recent iSeries OS versions (>= iSeries v5.3).

Friday, March 20, 2015

Shared Memory in Unix - System V vs POSIX API

Many Linux/Unix developers these days have already forgotten the "Unix wars" of the 80s and 90s, in which the System V camp (i.e. the "commercial" camp) fought against the open-source (BSD) camp. Recently, I found a relic of those days lingering in an IBM Unix-like API. The system I worked on is not exactly Unix per se, but it has compatibility APIs for the IBM AIX API.

Enough with the background story. The task at hand required me to craft a solution using the shared memory and semaphore APIs. I was surprised that the said environment doesn't support the shm_open() POSIX API. Upon further scrutiny, the system doesn't support any of the shm_XXX() APIs. This is where I took a step back and looked into the platform's history to get the big picture. Then I realized that this system is what was once described as System V compliant. This "standard" predates POSIX to some extent, from before Linux and the BSD-derived Unixes took over the enterprise. In those times, the enterprise Unix market was mostly served by the big Unix vendors, i.e. IBM, HP, Sun, SCO, etc., and that was when the System V standard was "ratified" for their customers. Now, back to the problem: if the shm_XXX() POSIX APIs are not supported, then which API am I supposed to use? The answer is the shmXXX() API. Instead of shm_open() and friends, you get shmget() and friends. Linux and most BSD-derived Unixes of today support the System V API as well, so these System V APIs can be thought of as portable among most Unix systems in operation today.
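For a quick taste of the API, here is a minimal sketch of my own: create a segment, attach it, write to it, then detach and remove it. The key and size are arbitrary values chosen for illustration, and error handling is kept to a minimum.
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = ftok(".", 'A');                     /* derive an IPC key */
    int id = shmget(key, 4096, IPC_CREAT | 0600);   /* create/get the segment */
    if (id == -1) {
        perror("shmget");
        return 1;
    }

    char *mem = shmat(id, NULL, 0);                 /* attach to our address space */
    if (mem == (void *) -1) {
        perror("shmat");
        return 1;
    }

    strcpy(mem, "hello from System V shared memory");
    printf("%s\n", mem);

    shmdt(mem);                                     /* detach... */
    shmctl(id, IPC_RMID, NULL);                     /* ...and mark for removal */
    return 0;
}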

There are plenty of resources online explaining how to use the System V shmXXX() family of APIs in more depth.
Hopefully, this saves the day for those poor souls (like me) trying to use shared memory on "legacy" Unix-like systems. 

Tuesday, March 10, 2015

GLSL 3.30 in Intel Haswell CPU

This error could show up while trying to run your OpenGL program that uses GLSL on Haswell:
$ error GLSL 3.30 not supported.. 
This is very probably because you didn't specifically ask the OpenGL implementation (in this case Mesa) for the Core Profile; on Haswell, only the OpenGL core profile supports GLSL 3.30, as shown in this Haswell glxinfo dump:
....
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile 
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.4.5
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
....
OpenGL version string: 3.0 Mesa 10.4.5
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
....
As you can see, the non-core profile only supports up to GLSL 1.30. Now, how do you go about asking for the core profile in your OpenGL application? If you are using freeglut, you can use these functions:

...
#include <GL/freeglut.h>
...
int main(int argc, char * argv[])
{
...
glutInitContextVersion(3,3);
glutInitContextProfile(GLUT_CORE_PROFILE);
...
}
Remember that the initialization code above must be called when you initialize your OpenGL environment. Also, it only applies if you use freeglut library.

Sunday, March 8, 2015

Using Windows Key in Fluxbox

It's easy to make a Windows-specific key work in Fluxbox. All you have to do is as follows:
  1. Find out how the Xorg server recognizes the key. You can use xev for this: run xev from xterm or another terminal and observe the name of the key shown in the xev log. For example, the Windows key on my laptop is recognized as Super_L by Xorg.
  2. Add the key to the ~/.fluxbox/keys file. 
Let me give you an example. Let's add the Windows key (Super_L) to my Fluxbox keys configuration file and have it invoke the Fluxbox "root" menu when pressed. This is the snippet from the ~/.fluxbox/keys file that makes this work:
#  Windows Key -- show root menu
Super_L :RootMenu

That's it. That's all you need to make the Windows key work as you expect (analogous to how the Windows key works in Windows).

Wednesday, March 4, 2015

Adding Lock Screen to Fluxbox

It's easy to add a lock screen to Fluxbox. You have many options, but I'll stick with xscreensaver because it's the most bug-free and standardized. For those using Arch Linux, just "pacman -S xscreensaver" and you get all you need to run xscreensaver.

Adding the menu to Fluxbox is easy. I added the lock screen menu to ~/.fluxbox/usermenu. The additional menu entry is as follows:
[separator]      
[exec] (Lock Screen) {xscreensaver-command -lock} 
[separator]      
I added the separators to make the menu entry stand out, to reduce the chance of inadvertently clicking it. This is how it looks in my Fluxbox "root" menu (the lock screen menu entry color is inverted for emphasis):

You have to start the xscreensaver "server" upon starting Xorg. In my case, I add the following line to ~/.xprofile:
xscreensaver -no-splash &
Note that you have to make sure ~/.xprofile is sourced by ~/.xinitrc when you start Xorg. This is my ~/.xinitrc:
#!/bin/sh

# Make sure this is before the 'exec' command or it won't be sourced.
[ -f /etc/xprofile ] && source /etc/xprofile
[ -f ~/.xprofile ] && source ~/.xprofile

# Parse .Xresources
[[ -f ~/.Xresources ]] && xrdb -merge ~/.Xresources

exec startfluxbox
Hopefully this is useful for others using Fluxbox out there.

Monday, March 2, 2015

Adding New Menu Entries to Fluxbox

In this post I assume that you're using Fluxbox 1.3.7 or newer. In this version of Fluxbox, adding new menu entries to the "root menu" (the one shown when you right click on the desktop) consists of these steps:
  1. Add the menu entries in ~/.fluxbox/usermenu
  2. Run fluxbox-generate_menu
Editing ~/.fluxbox/menu directly is not recommended. However, I do recommend reading that file to get an idea of the syntax used to create menu entries and submenus. Therefore, we just add the new menu entries to the usermenu file. This is an example:
      [exec] (Evince Reader) {evince} 
      [exec] (File Manager) {pcmanfm} 
Usually, usermenu is empty, unless you have altered it previously. The configuration above means I added two entries, i.e. Evince Reader and the PCManFM file manager, to the root menu. Once the usermenu file is ready as shown above, I run fluxbox-generate_menu to create the new root menu in Fluxbox. This is the result:

As you can see, the new menu entries are now integrated into fluxbox root menu.

Saturday, February 28, 2015

Fullscreen Urxvt in Arch Linux (Fluxbox)

Contrary to what is mentioned at https://bbs.archlinux.org/viewtopic.php?pid=1155345#p1155345 and https://wiki.archlinux.org/index.php/rxvt-unicode#Fullscreen, you don't need the AUR package (urxvt-fullscreen) to run urxvt in fullscreen mode if you are using Fluxbox as your window manager.

In Fluxbox, you can use Alt+F11 to switch any X application to "fullscreen". Therefore, just press Alt+F11 and your urxvt should switch to fullscreen. I found this inadvertently while trying to switch another application to fullscreen; I didn't realize that urxvt had the focus at that point. Nonetheless, it's very useful because I don't need an additional package to make it work.

The "fullscreen" moniker here is rather loose, because what actually happens is that Fluxbox makes the window currently in focus occupy the entire screen (a.k.a. maximized) while disabling its window decorations. This is effectively the same as what you would expect from a "fullscreened" application, i.e. an application that supports fullscreen mode, such as a video player.

As a bonus, this is my .Xresources snippet that shows how to make urxvt "transparent" and at the same time remove the scrollbar (which you won't need if you're using tmux or screen):
URxvt.depth: 32
URxvt.foreground: rgba:eeee/eeee/eeee/ffff
URxvt.background: rgba:0000/0000/0000/cccc
URxvt.cursorBlink: True 
URxvt.cursorUnderline: True 
URxvt.scrollBar: False 
Well, you might want to see the result. The screen-shot is shown below.

Urxvt (+tmux) in its glory

As you can see, the result is quite entertaining to see :-)

Sunday, February 22, 2015

Windows - The object invoked has disconnected from its client (Partial Fix)

I got this particular error message:
"The object invoked has disconnected from its client"
when trying to log into my Windows 8.1 machine locally.

This error is particularly debilitating because I could not log in to my machine even with known-good accounts that I use for day-to-day or administrative tasks. The culprit turned out to be a USB flash disk left plugged into one of the machine's USB sockets. The fix is very simple: just remove the "offending" USB flash disk and all is well.
The USB flash disk in question contains a valid UEFI-bootable OS along with the mandatory EFI partition needed to boot a UEFI-compliant OS. It seems some kind of check in the Windows "chain of trust" detects this "irregularity" as breaking the chain of trust when Windows boots. Therefore, Windows decides that this is a malicious login attempt and blocks access. However, somehow the protection mechanism ends up with a very uninformative message.


I list my analysis as "Partial Fix" because it works for my particular Windows 8.1 setup but might not work for other Windows versions.

Intel PCH (Haswell) Sound (ALSA) Problem and Fix for Arch Linux

The problem fixed by the method explained here is as follows: 
1. The kernel sound modules load just fine and recognize all the sound-related chips.
2. The Intel PCH platform (in my case Haswell) seems to be perfectly fine in loading the modules. Moreover, checks via the alsa-utils programs seem to work as well.
3. Despite that, there's no sound coming out of the analog output. The analog output is usually the one the laptop speakers are connected to and the one you plug the earphone jack into. If you are using the HDMI output of your Intel platform, you might not need this fix.

The "hint" that the problem exists is usually a failure when running:
$ speaker-test
with a rather cryptic error message akin to "unable to open XXXX file".

This fix is focused on my Haswell laptop, but it could work on other Intel systems with a PCH as well. The basic idea comes from this post: https://bbs.archlinux.org/viewtopic.php?id=180102; the only difference is in the details of the hardware. The basic idea of the fix is to force ALSA to reorder the module loading so that the sound chip controlling the analog output becomes the default output.
This fix ensures that the sound from the default sound device comes out of the analog output instead of the HDMI output. Anyway, you should have some idea of which PCI device in your machine controls the analog audio output. Usually, on an Intel PCH platform (at least on Haswell), the device that controls the analog output doesn't connect to the CPU directly but rather to the PCH, because the audio device in the CPU chip only connects to the HDMI output. You should consult the system block diagram to be sure about this.

The fix consists of a *.conf file in /etc/modprobe.d. This file instructs the kernel module loader (and thereby the ALSA subsystem) to reorder the sound modules so as to make the PCH audio device the "default" one (having index 0). In my case, I named the file /etc/modprobe.d/alsa-base.conf. These are its contents:
# Intel PCH
options snd-hda-intel index=0  model=auto vid=8086 pid=9c20
# Intel HDMI 
options snd-hda-intel index=1  model=auto vid=8086 pid=0a0c

The module configuration above sets the PCI device with Vendor IDentifier (VID) 8086 and Product IDentifier (PID) 9C20, i.e. the Intel PCH, as the default sound device (index 0). You can find your specific VID and PID with the following command:
$ lspci -nn | grep -i audio
00:03.0 Audio device [0403]: Intel Corporation Haswell-ULT HD Audio Controller [8086:0a0c] (rev 09)
00:1b.0 Audio device [0403]: Intel Corporation 8 Series HD Audio Controller [8086:9c20] (rev 04)

This is the output of aplay -l in my system after I applied the fix:
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC3239 Analog [ALC3239 Analog]
  Subdevices: 0/1
  Subdevice #0: subdevice #0
card 1: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
As you can see, the PCH audio device has become the "default" audio device (card 0). With the fix, speaker-test no longer stops with "unable to open XXXX file" but outputs pink noise from the speaker. This is an excerpt of the speaker-test log from my terminal:
$ speaker-test

speaker-test 1.0.28

Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
Using 16 octaves of pink noise
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 2048 to 16384
Period size range from 1024 to 1024
Using max buffer size 16384
Periods = 4
was set period_size = 1024
was set buffer_size = 16384
 0 - Front Left
Time per period = 2.657561
 0 - Front Left
Time per period = 2.986723
 0 - Front Left
I hope this also helps those experiencing similar problems out there. Ciao.

Sunday, February 15, 2015

Zsh-Tmux Configuration for (Arch) Linux

I've been doing some testing in order to migrate to tmux and zsh for my daily work on Linux. This post shows my configuration for both of them. First up is .zshrc:
autoload -U compinit promptinit
compinit
promptinit

# This will set the default prompt to the walters theme
prompt walters

zstyle ':completion:*' menu select
setopt completealiases
setopt HIST_IGNORE_DUPS

alias tmux='tmux -u'

# create a zkbd compatible hash;
# to add other keys to this hash, see: man 5 terminfo
typeset -A key

key[Home]=${terminfo[khome]}
key[End]=${terminfo[kend]}
key[Insert]=${terminfo[kich1]}
key[Delete]=${terminfo[kdch1]}
key[Up]=${terminfo[kcuu1]}
key[Down]=${terminfo[kcud1]}
key[Left]=${terminfo[kcub1]}
key[Right]=${terminfo[kcuf1]}
key[PageUp]=${terminfo[kpp]}
key[PageDown]=${terminfo[knp]}

# setup key accordingly
[[ -n "${key[Home]}"     ]]  && bindkey  "${key[Home]}"     beginning-of-line
[[ -n "${key[End]}"      ]]  && bindkey  "${key[End]}"      end-of-line
[[ -n "${key[Insert]}"   ]]  && bindkey  "${key[Insert]}"   overwrite-mode
[[ -n "${key[Delete]}"   ]]  && bindkey  "${key[Delete]}"   delete-char
[[ -n "${key[Up]}"       ]]  && bindkey  "${key[Up]}"       up-line-or-history
[[ -n "${key[Down]}"     ]]  && bindkey  "${key[Down]}"     down-line-or-history
[[ -n "${key[Left]}"     ]]  && bindkey  "${key[Left]}"     backward-char
[[ -n "${key[Right]}"    ]]  && bindkey  "${key[Right]}"    forward-char
[[ -n "${key[PageUp]}"   ]]  && bindkey  "${key[PageUp]}"   beginning-of-buffer-or-history
[[ -n "${key[PageDown]}" ]]  && bindkey  "${key[PageDown]}" end-of-buffer-or-history


# Finally, make sure the terminal is in application mode, when zle is
# active. Only then are the values from $terminfo valid.
if (( ${+terminfo[smkx]} )) && (( ${+terminfo[rmkx]} )); then
    function zle-line-init () {
  printf '%s' "${terminfo[smkx]}"
 }

 function zle-line-finish () {
  printf '%s' "${terminfo[rmkx]}"
 }
 zle -N zle-line-init
 zle -N zle-line-finish
fi

I lay no claim to this configuration because it's a mix-and-match of several configurations, mainly from the Arch Linux wiki. It works without problems, though. Next up is .zshenv:
typeset -U path
path=(~/bin $path)
So, hopefully, you now have a working zsh configuration. The comments in the config files are self-explanatory. Next up is my .tmux.conf:
#Prefix is Ctrl-a
set -g prefix C-a
bind C-a send-prefix
unbind C-b

# set shell
set -g default-shell /bin/zsh

set -sg escape-time 1
set -g base-index 1
setw -g pane-base-index 1

#Mouse works as expected
setw -g mode-mouse on
set -g mouse-select-pane on
set -g mouse-resize-pane on
set -g mouse-select-window on

setw -g monitor-activity on
set -g visual-activity on

set -g mode-keys vi
set -g history-limit 10000

# y and p as in vim
bind Escape copy-mode
unbind p
bind p paste-buffer
bind -t vi-copy 'v' begin-selection
bind -t vi-copy 'y' copy-selection
bind -t vi-copy 'Space' halfpage-down
bind -t vi-copy 'Bspace' halfpage-up

# extra commands for interacting with the ICCCM clipboard
bind C-c run "tmux save-buffer - | xclip -i -sel clipboard"
bind C-v run "tmux set-buffer \"$(xclip -o -sel clipboard)\"; tmux paste-buffer"

# easy-to-remember split pane commands
bind | split-window -h
bind - split-window -v
unbind '"'
unbind %

# moving between panes with vim movement keys
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R

# moving between windows with vim movement keys
bind -r C-h select-window -t :-
bind -r C-l select-window -t :+

# resize panes with vim movement keys
bind -r H resize-pane -L 5
bind -r J resize-pane -D 5
bind -r K resize-pane -U 5
bind -r L resize-pane -R 5
Well, this one is also quite self-explanatory. At the very least, it provides you with basic command completion, among other things, and a tmux that recognizes zsh. It also remaps the prefix from C-b to C-a, which is easier to reach on a QWERTY keyboard.
This is how they look at work:
A bit of explanation on the 'tmux -u' alias: the terminal application that I'm using somehow cannot render tmux's vertical line delimiter correctly unless I force tmux to assume UTF-8 support. If not forced, it uses 'x' as the delimiter, which was annoying.

Friday, January 30, 2015

CCSIDs a.k.a Code Pages in IBM PASE for i

It all started with the iconv() function in iSeries PASE. I needed to figure out how to convert strings back and forth between iSeries PASE and ILE programs. I didn't realize at first that there were TWO libiconv libraries, instead of one, installed on the iSeries I worked with: one from GNU and one from IBM (the original one). I had to specify the one I intended to use explicitly in my Makefile.am, as follows:

 pase_to_rpg_string_LDADD = /QOpenSys/QIBM/ProdData/OS400/PASE/lib/libiconv.a

Well, it then turned out that the code page "string" specified for ILE C didn't work. I found out that the code page string should be as shown here. I was puzzled as to where that guy found the code page string. Then I finally found that it was the AIX code page specification after all, as this link shows. I reproduce it below, just in case (and hopefully IBM doesn't mind ;).

Table 1. CICS Shortcodes, CCSIDs and code pages that TXSeries for Multiplatforms supports
CICS short code name | CCSID | AIX® and Windows code page | HP-UX code page name | Solaris code page name | Description
37 | 37 | IBM-037 | american_e | IBM-037 | IBM Latin-1 EBCDIC
8859-1 | 819 | ISO8859-1 | iso8859_1 | 8859 | Latin-1 ASCII (ISO)
819 | 819 | ISO8859-1 | iso8859_1 | 8859 | Latin-1 ASCII (IBM/ISO)
850 | 850 | IBM-850 | roman8 | IBM-850 | Latin-1 ASCII
437 | 437 | IBM-437 | iso8859_1 | IBM-437 | Latin-1 (PC) ASCII
930 | 930 | IBM-930 | cp930 | IBM-930 | Japanese EBCDIC
931 | 931 | IBM-931 | japanese_e | IBM-931 | Japanese EBCDIC
939 | 939 | IBM-939 | cp939 | IBM-939 | Japanese EBCDIC
932 | 932 | IBM-932 | sjis | ja_JP.pck | Japanese ASCII
EUCJP | 954 | IBM-eucJP | eucJP | eucJP | Japanese ASCII (ISO)
942 | 942 | IBM-942 | IBM-942 | IBM-942 | Japanese ASCII
943 | 943 | IBM-943 | IBM-943 | IBM-943 | Japanese ASCII
EUCKR | 970 | IBM-eucKR | eucKR | eucKR | Korean ASCII (ISO)
934 | 934 | IBM-934 | IMB-934 | IBM-934 | Korean ASCII
944 | 944 | IBM-944 | IBM-944 | IBM-944 | Korean ASCII
949 | 949 | IBM-949 | korean15 | IBM-949 | Korean ASCII
933 | 933 | IBM-933 | korean_e | IBM-933 | Korean EBCDIC
EUCTW | 964 | IBM-eucTW | IBM-eucTW | eucTW | Traditional Chinese
938 | 938 | IBM-938 | IBM-938 | IBM-938 | Traditional Chinese ASCII
948 | 948 | IBM-948 | IBM-948 | IBM-948 | Traditional Chinese ASCII
937 | 937 | IBM-937 | chinese-t_e | IBM-937 | Traditional Chinese EBCDIC
BIG5 | 950 | Zh_TW.big5 | big5 | zh_TW.BIG5 | Traditional Chinese BIG5
946 | 946 | IBM-946 | IBM-946 | IBM-946 | Simplified Chinese ASCII
1381 | 1381 | IBM-1381 | hp15CN | IBM-1381 | Simplified Chinese ASCII
935 | 935 | IBM-935 | chinese-s_e | IBM-935 | Simplified Chinese EBCDIC
EUCN | 1383 | IBM-eucCN | chinese-s_e | eucCN | Simplified Chinese ASCII (ISO)
GB18030 | 5488 | GB18030 | gb18030 | GB18030 | Simplified Chinese GB18030
864 | 864 | IBM-864 | arabic8 | IBM-864 | Arabic ASCII
8859-6 | 1089 | ISO8859-6 | iso8859_6 | ISO8859-6 | Arabic ASCII (ISO)
1089 | 1089 | ISO8859-6 | iso8859_6 | ISO8859-6 | Arabic ASCII (IBM/ISO)
420 | 420 | IBM-420 | arabic_e | IBM-420 | Arabic EBCDIC
855 | 855 | IBM-855 | IBM-855 | IBM-855 | Cyrillic ASCII
866 | 866 | IBM-866 | IBM-866 | IBM-866 | Cyrillic ASCII
8859-5 | 915 | ISO8859-5 | iso8859_5 | ISO8859-5 | Cyrillic ASCII (ISO)
915 | 915 | ISO8859-5 | iso8859_5 | ISO8859-5 | Cyrillic ASCII (IBM/ISO)
1025 | 1025 | IBM-1025 | IBM-1025 | IBM-1025 | Multilingual Cyrillic EBCDIC
869 | 869 | IBM-869 | greek8 | IBM-869 | Greek ASCII
8859-7 | 813 | ISO8859-7 | iso8859_7 | ISO8859-7 | Greek ASCII (ISO)
813 | 813 | ISO8859-7 | iso8859_7 | ISO8859-7 | Greek ASCII (IBM/ISO)
875 | 875 | IBM-875 | greek_e | IBM-875 | Greek EBCDIC
856 | 856 | IBM-856 | hebrew8 | IBM-856 | Hebrew ASCII
8859-8 | 916 | ISO8859-8 | iso8859_8 | ISO8859-8 | Hebrew ASCII (ISO)
916 | 916 | ISO8859-8 | iso8859_8 | ISO8859-8 | Hebrew ASCII (IBM/ISO)
424 | 424 | IBM-424 | hebrew_e | IBM-424 | Hebrew EBCDIC
273 | 273 | IBM-273 | german_e | IBM-273 | Austria, Germany EBCDIC
277 | 277 | IBM-277 | danish_e | IBM-277 | Denmark, Norway EBCDIC
278 | 278 | IBM-278 | finnish_e | IBM-278 | Finland, Sweden EBCDIC
280 | 280 | IBM-280 | italian_e | IBM-280 | Italy EBCDIC
284 | 284 | IBM-284 | spanish_e | IBM-284 | Spain, Latin Am. (Sp) EBCDIC
285 | 285 | IBM-285 | english_e | IBM-285 | UK EBCDIC
297 | 297 | IBM-297 | french_e | IBM-297 | France EBCDIC
500 | 500 | IBM-500 | IBM-500 | IBM-500 | International Latin-1 EBCDIC
871 | 871 | IBM-871 | icelandic_e | IBM-871 | Iceland EBCDIC
852 | 852 | IBM-852 | IBM-852 | IBM-852 | Latin-2 ASCII
8859-2 | 912 | ISO8859-2 | iso8859_2 | ISO8859-2 | Latin-2 ASCII (ISO)
912 | 912 | ISO8859-2 | iso8859_2 | ISO8859-2 | Latin-2 ASCII (IBM/ISO)
870 | 870 | IBM-870 | IBM-870 | IBM-870 | Latin-2 EBCDIC
857 | 857 | IBM-857 | turkish8 | IBM-857 | Turkey ASCII
8859-9 | 920 | ISO8859-9 | iso8859_9 | ISO8859-9 | Turkey ASCII (ISO)
920 | 920 | ISO8859-9 | iso8859_9 | ISO8859-9 | Turkey ASCII (IBM/ISO)
1026 | 1026 | IBM-1026 | turkish_e | IBM-1026 | Turkey EBCDIC
UTF-8 | 1208 | UTF-8 (only) | UTF-8 | UTF-8 | Unicode file code set
UCS-2 | 1200 | UCS-2 (only) | UCS-2 | UCS-2 | Unicode processing code set

So, the next time you code on iSeries PASE, use the "IBM-037" (EBCDIC) and "ISO8859-1" (8-bit ASCII) code page strings to convert back and forth.
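For instance, a minimal iconv() sketch along these lines (my own illustration, with error handling trimmed) converts an EBCDIC IBM-037 string to ISO8859-1:
#include <stdio.h>
#include <iconv.h>

int main(void)
{
    iconv_t cd = iconv_open("ISO8859-1", "IBM-037");   /* to, from */
    if (cd == (iconv_t) -1) {
        perror("iconv_open");
        return 1;
    }

    char ebcdic[] = { (char) 0xC8, (char) 0xC9, 0 };   /* "HI" in EBCDIC 037 */
    char out[16] = { 0 };
    char *in_p = ebcdic, *out_p = out;
    size_t in_left = 2, out_left = sizeof(out);

    if (iconv(cd, &in_p, &in_left, &out_p, &out_left) == (size_t) -1)
        perror("iconv");
    else
        printf("converted: %s\n", out);

    iconv_close(cd);
    return 0;
}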

I hope this post will somehow save somebody someday.