Sunday, October 9, 2016

"Replacing" C++ Virtual Function with Template (and More)

There are several ways to replace C++ virtual functions with templates. The best-known technique for accomplishing the task is the:
Curiously recurring template pattern (CRTP)

However, I found that the philosophy behind virtual functions is itself quite "flawed" once one already uses templates in one's C++ code. Why? Because the Standard Template Library (STL), Boost, and other C++ template libraries for that matter take a very different approach to programming than the Object-Oriented (OO) philosophy. Most, if not all, of them are meant to provide generic programming in C++ (as opposed to OO)--generic as in Ada generics. I might be drawing a hard line here, but nonetheless, that is what the STL was written for. See for yourself what Alexander Stepanov (the principal STL author) wrote in the notes quoted below.

What I stated in the previous paragraph means that, in order to use templates in a substantial C++ code base, we need a paradigm shift. Instead of looking at the solution as related objects, I think we need to look at the solution as "interfaces to generic algorithms". You should gain more understanding of what I mean once you've read Stepanov's remarks in his notes. This is an important excerpt:
It is essential to know what can be done effectively before you can start your design. Every programmer has been taught about the importance of top-down design. While it is possible that the original software engineering considerations behind it were sound, it came to signify something quite nonsensical: the idea that one can design abstract interfaces without a deep understanding of how the implementations are supposed to work. It is impossible to design an interface to a data structure without knowing both the details of its implementation and details of its use. The first task of good programmers is to know many specific algorithms and data structures. Only then they can attempt to design a coherent system. Start with useful pieces of code. After all, abstractions are just a tool for organizing concrete code.
If I were using top-down design to design an airplane, I would quickly decompose it into three significant parts: the lifting device, the landing device and the horizontal motion device. Then I would assign three different teams to work on these devices. I doubt that the device would ever fly. Fortunately, neither Orville nor Wilbur Wright attended college and, therefore, never took a course on software engineering. The point I am trying to make is that in order to be a good software designer you need to have a large set of different techniques at your fingertips. You need to know many different low-level things and understand how they interact.
The most important software system ever developed was UNIX. It used the universal abstraction of a sequence of bytes as the way to dramatically reduce the systems’ complexity. But it did not start with an abstraction. It started in 1969 with Ken Thompson sketching a data structure that allowed relatively fast random access and the incremental growth of files. It was the ability to have growing files implemented in terms of fixed size blocks on disk that lead to the abolition of record types, access methods, and other complex artifacts that made previous operating systems so inflexible. (It is worth noting that the first UNIX file system was not even byte addressable – it dealt with words – but it was the right data structure and eventually it evolved.) Thompson and his collaborators started their system work on Multics – a grand all-encompassing system that was designed in a proper top-down fashion. Multics introduced many interesting abstractions, but it was a still-born system nevertheless. Unlike UNIX, it did not start with a data structure!
One of the reasons we need to know about implementations is that we need to specify the complexity requirements of operations in the abstract interface. It is not enough to say that a stack provides you with push and pop. The stack needs to guarantee that the operations are taking a reasonable amount of time – it will be important for us to figure out what “reasonable” means. (It is quite clear, however, that a stack for which the cost of push grows linearly with the size of the stack is not really a stack – and I have seen at least one commercial implementation of a stack class that had such a behavior – it reallocated the entire stack at every push.) One cannot be a professional programmer without being aware of the costs of different operations. While it is not necessary, indeed, to always worry about every cycle, one needs to know when to worry and when not to worry. In a sense, it is this constant interplay of considerations of abstractness and efficiency that makes programming such a fascinating activity. 
I need to emphasize the last paragraph of Stepanov's notes because I have just encountered a not-so-"miserable" failure very closely related to what he said in that paragraph. I needed to clean up some left-over code which was supposed to provide an abstraction for a file-system operation on two very different OSes. Unfortunately, the previous code failed "quite" miserably to provide a good abstraction for the task, precisely because it wasn't designed from the ground up on both OSes as Stepanov suggests. It was only designed from the ground up to work well on one of them, and therefore the design leans toward that one. Fortunately, not all hope is lost, because I think the code can still be salvaged through several iterations to fix the abstraction. I said "quite" miserably because the state of the matter can still be salvaged/fixed somehow; it's not a total disaster. I hope this is good food for thought for C++ programmers out there.

Tuesday, September 27, 2016

What are 0xDEADBEEF, 0xFEEEFEEE, 0xCAFEFEED & co. ?

If you stumbled into this post looking for a detailed answer about any of the values mentioned in the title, without further ado, there are more complete explanations at:

But, if you want to know the big picture, read on ;-)

Chances are, you stumbled here after some hardcore debugging and found yourself baffled by the values that showed up in the CPU registers or in heap/stack memory. I found the first two values in the title (0xDEADBEEF, and a variant of the second, i.e. 0xFEEEFEEEFEEEFEEE) while debugging two different systems. The 0xDEADBEEF was on a System i (POWER5) system, and the second one was on a 64-bit Windows machine.

All of these values are debugging-aid values, so to speak. They are meant to be very visible in the debugger (to those who already know them). The purpose is to signal that something went wrong and to give an idea of what it possibly was, i.e. where the error possibly comes from, with just a glance at the debugger. For example, 0xDEADBEEF could mean either that the program accessed uninitialized (heap?) memory or that a NULL pointer was encountered (pointing to uninitialized memory). Either way, it means something is wrong with one of your pointers. A similar case is indicated by 0xFEEEFEEE or its 64-bit variant.

These "readable" hexadecimal values are categorized as hexspeak, because they look like a "language" despite being hexadecimal values, i.e. you can read them aloud in English or another intended human language. The most hilarious of them all is 0xB16B00B5 ("Big Boobs"). I wonder who the Hyper-V project manager was at the time this Linux guest signature was decided at Microsoft, LOL.

Tuesday, September 6, 2016

Debugging Cross-Compiled Windows Application (Executable and DLL)

I explained how to cross compile Windows applications and DLLs in Arch Linux in another post. Now, let's proceed to techniques that you can use to debug the results of the cross compilation. The general steps are as follows:

  1. Test the cross-compilation result in Wine (running on Linux of course). If the executable can run in Wine or the DLL can be loaded and (at least) partially executed, then, you may proceed to the next step. Otherwise, double check your cross-compiler as it may emit the wrong kind of executable.
  2. Run the executable (and, if required, all the DLLs) in Windows. First without a debugger, then under a debugger, should an anomaly (or more) be found during the run(s).
  3. In the event that you need a debugger, make sure that the cross-compiled version of the code contains debugging symbols. For a GNU cross compiler, you can use the "-g" switch in gcc/g++ to generate the debugging symbols.
  4. In the event that you need a debugger, make sure your Windows debugger is recent enough to parse the debugging symbols in your cross-compiled executables and/or DLLs. Also, make sure that it can handle local variables: missing local-variable debugging support, or an inability to display function parameter values, indicates that your debugger version probably isn't compatible with the cross-compiler. This is particularly true for the gcc/g++ and gdb combination. For a gcc/g++ cross compiler, you can use gdb from the nuwen "distribution", which ships a very recent GDB version. Note: I was caught off-guard before by an older version of gdb on Windows because it was still quite usable.
To validate your gdb version, make sure that your debugger output is similar to this:
Valid GDB output
As you can see in the screenshot above, you can inspect all local variables while stopped at a breakpoint in a function that clearly has local variables. The debugger also shows the values of the parameters passed to the function (where you set the breakpoint), including the function's implicit this parameter. If you can't see any of that, it means you are using a gdb that is incompatible with the gcc/g++ cross-compiler used to create the executable/DLL. Try finding a newer gdb version than the one you're currently using.

You can use a gdb "script" to carry out semi-automatic debugging. The screenshot above shows how to use a gdb script, i.e. via the source command in gdb. The source command basically tells gdb to parse the command file, i.e. the debugging script, as if you were typing the debugging commands yourself in gdb. See the gdb documentation on command files for more info. This is the gdb command file used in the screenshot above:

Hopefully, this post is helpful for those cross compiling applications to Windows from Linux.

Wednesday, August 17, 2016

Cross Compiling Windows Application and DLLs in (Arch) Linux

Cross compiling 32-bit and 64-bit Windows applications in Linux is much easier these days than in the past, thanks to the Mingw-w64 project. It's even a little easier in Arch Linux, because most of what you need--including an extensive collection of libraries--is already in the AUR. For starters, install the cross compiler: Then you can continue to install all the other stuff (libraries and their dependencies) that you need. In most cases, you can just build and install each package by using its PKGBUILD file from the AUR directly (via: cd ${src_dir}; makepkg -sri ). However, in some cases, you need to make adjustments to the PKGBUILD file.

Let's focus on mingw-w64 in Arch Linux. There are several important matters that you need to take care of to cross compile open-source projects that use CMake in Arch Linux to build Windows executables and DLLs:
  • Open-source projects that use the CMake build system need to use the mingw-w64-specific cmake wrapper (look at the example PKGBUILD below).
  • You need to set the include path to the cross-compiler toolchain's include path, not the host's include path.
This is an example PKGBUILD file for a simple Helloworld application that uses Boost. It assumes that you have built and installed the cross-compiled Boost DLLs in your Arch Linux mingw-w64 environment.

_architectures="x86_64-w64-mingw32 i686-w64-mingw32"

rm -rvf build-*

for _arch in ${_architectures}; do
  mkdir -p build-${_arch} && pushd build-${_arch}
  # Use the mingw-w64-specific cmake wrapper, not the host cmake
  ${_arch}-cmake ..
  make VERBOSE=1
  popd
done
The example above is the PKGBUILD file for the sample Helloworld project. You can clone the project over at:

There are also some things to take care of if you cross compile open-source projects that use autotools in Arch Linux to build Windows executables and DLLs:
  • Open-source projects that use the autotools build system need to use the mingw-w64-specific configure script (look at the example PKGBUILD below).
  • In some cases, you need to "fool" the libtool script into passing its "dynamic/static library integrity" check. You don't need to be afraid of this step, because you can always verify the compiler output with the Linux file utility, along with Wine, before testing/using it on a real Windows installation.
This is an example PKGBUILD file for popt library:
# Maintainer: Sebastian Morr 
# Modified by Pinczakko for Mingw-w64 cross compilation to 64-bit Windows

pkgdesc="A commandline option parser (mingw-w64)"
options=(!strip !buildflags staticlibs)

_architectures="i686-w64-mingw32 x86_64-w64-mingw32"

prepare() {
  cd "$srcdir/${_pkgname}-$pkgver"
  patch -p1 -i ../0001-nl_langinfo.mingw32.patch
  patch -p1 -i ../197416.all.patch
  patch -p1 -i ../217602.all.patch
  patch -p1 -i ../278402-manpage.all.patch
  patch -p1 -i ../318833.all.patch
  patch -p1 -i ../356669.all.patch
  patch -p1 -i ../367153-manpage.all.patch
  patch -p1 -i ../get-w32-console-maxcols.mingw32.patch
  patch -p1 -i ../no-uid-stuff-on.mingw32.patch
}

build() {
  # We assume that libtool check on 64-bit Windows DLL is broken
  # in mingw-w64 Linux cross compiler. So, force it to pass all checks
  export lt_cv_deplibs_check_method='pass_all'

  cd "$srcdir/${_pkgname}-$pkgver"
  for _arch in ${_architectures}; do
    mkdir -p build-${_arch} && pushd build-${_arch}
    ${_arch}-configure --enable-shared --enable-static
    make
    popd
  done
}

package () {
  for _arch in ${_architectures}; do
    cd "${srcdir}/${_pkgname}-${pkgver}/build-${_arch}"
    make install DESTDIR="${pkgdir}"
    rm -rf "${pkgdir}/usr/${_arch}/share/man"
    ${_arch}-strip -x -g "${pkgdir}/usr/${_arch}/bin/"*.dll
    ${_arch}-strip -g "${pkgdir}/usr/${_arch}/lib/"*.a
  done

  install -D -m644 "${srcdir}/${_pkgname}-${pkgver}/COPYING" "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}

You can clone the files required to "cross build" popt library at:

Hopefully, this is useful for those developing Windows applications in Linux.