TeraStation NAS 5400D Firmware Customization

Overview

I bought a second-hand Buffalo TeraStation NAS 5400D last year. It cost me a few hundred bucks, still far cheaper than a brand-new unit. I configured it as a home NAS and was really happy with its performance. However, the firmware is locked down, so I could not install any custom applications, or maybe I was just too busy at the time to dig into it. Now is the time: while working from home to isolate from COVID-19, I finally have an opportunity to play with it.

Interestingly, Buffalo has also released a new firmware, so I can take this opportunity to upgrade it and then customize it. The new firmware is:

VERSION=4.06-0.02
KERNEL=2020/02/04 18:07:14
INITRD=2020/02/04 18:39:47
ROOTFS=2020/02/04 18:39:38
NO_FILE_BOOT = 1
FILE_KERNEL = uImage.img
FILE_INITRD = initrd.img
FILE_ROOTFS = hddrootfs.img 

That firmware is released for the TS5000 and TS4000 series. So let's play with it. Here is the list of things to do:

  1. Upgrade to the new firmware -> the easy part: extract the package and run the TSUpdater.exe tool, which downloads the firmware to the target and updates it.
  2. Enable the SSH service so we can log in
  3. Install the aMule tool

Customizing firmware

Hardware configuration:

  • CPU: Intel Atom D2550 @ 1.86 GHz, 2 cores / 4 threads (Hyper-Threading). The GPU is a PowerVR and the platform codename is Cedar Trail; Intel also released a Yocto SDK for this platform.
  • RAM: 2 GB DDR3, 64-bit, 1066 MHz
  • 4 HDD bays

The stock firmware is Linux-based, using GRUB 1.9 and Linux kernel 3.10.69-atom_usi.

The root password is randomly generated, so we need a workaround. First, we need to extract the firmware and pull out the pieces we need. The firmware archive is password protected; however, the passwords have been leaked.

Unzip the firmware file and locate the directory containing LSUpdater.exe or TSUpdater.exe, then locate the correct initrd.img and uImage.img for your device (usually there is only one; if there are more, they typically include the corresponding device model in the name). These files are actually password-protected zip files, and the password will be one of the following:

1NIf_2yUOlRDpYZUVNqboRpMBoZwT4PzoUvOPUp6l
aAhvlM1Yp7_2VSm6BhgkmTOrCN1JyE0C5Q6cB3oBB
YvSInIQopeipx66t_DCdfEvfP47qeVPhNhAuSYmA4
IeY8omJwGlGkIbJm2FH_MV4fLsXE8ieu0gNYwE6Ty

Enable root shell on serial port

The firmware update package contains:

.
└── ts5000_4000-v406
    ├── img
    │   └── firm_confirm.png
    ├── TS5000_4000-v406
    │   ├── hddrootfs.img
    │   ├── initrd.img
    │   ├── linkstation_version.ini
    │   ├── linkstation_version.txt
    │   ├── TSUpdater.exe
    │   ├── TSUpdater.ini
    │   └── uImage.img
    └── TS5000_4000-v4.06.html

Unzip initrd.img, decompress it with zcat and loop-mount it, so we can patch it to open a root console without prompting for a password:

$ cd ts5000_4000-v406/TS5000_4000-v406
$ unzip initrd.img
# using password: YvSInIQopeipx66t_DCdfEvfP47qeVPhNhAuSYmA4 to unzip
password incorrect--reenter:
inflating: initrd-atom_d510.buffalo
inflating: initrd-atom_usi.buffalo
$ zcat initrd-atom_usi.buffalo > initrd-atom_usi.buffalo.uncompress
$ mkdir initrd
$ sudo mount -o loop initrd-atom_usi.buffalo.uncompress ./initrd

We need to update /usr/local/bin/buffalo_consoled.sh inside the initrd. We want to force the firmware to always open a root console on the serial port without asking for a password:

diff --git a/initrd/usr/local/bin/buffalo_consoled.sh b/initrd/usr/local/bin/buffalo_consoled.sh
index 8e9418e..32127a0 100755
--- a/initrd/usr/local/bin/buffalo_consoled.sh
+++ b/initrd/usr/local/bin/buffalo_consoled.sh
@@ -49,9 +49,9 @@ Daemon()
        while [ 1 ] 
        do
-               [ "${ENV_FORCE}" != "yes" ] && CheckRequirement
+               #[ "${ENV_FORCE}" != "yes" ] && CheckRequirement
                LogPrintOut "Starting getty on ttyS0."
-               /sbin/getty -L ttyS0 115200 vt100 &
+               /sbin/getty -n -l /bin/bash -L ttyS0 115200 vt100 &
                PID_CHILD=$!
                wait ${PID_CHILD}
        done

So we skip CheckRequirement and run getty with -n -l /bin/bash, which drops straight into a root bash shell without asking for a password. Done! Now repack the zip file and run the updater:

$ sudo umount ./initrd
$ gzip -9 initrd-atom_usi.buffalo.uncompress
$ mv initrd-atom_usi.buffalo.uncompress.gz initrd-atom_usi.buffalo
$ rm -f initrd.img
$ 
$ zip -e initrd.img initrd-atom_usi.buffalo initrd-atom_d510.buffalo
Enter password: 
Verify password: 
  adding: initrd-atom_usi.buffalo (deflated 1%)
  adding: initrd-atom_d510.buffalo (deflated 1%)

There you go: flash the new firmware to the NAS and we will have a root console on the COM port. The serial settings are 115200 8N1.
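
For example, with a USB serial adapter on the PC side, the console can be opened like this (the device node /dev/ttyUSB0 is an assumption, adjust it to your adapter):

$ sudo screen /dev/ttyUSB0 115200
# or, if you prefer minicom:
$ sudo minicom -D /dev/ttyUSB0 -b 115200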

Enable SSH

The SSH service is enabled when you enable SFTP, but you will not be able to log in via SSH because Buffalo changed the code base to drop the connection! We need to use another sshd. There are several ways to work around this: take a pre-built sshd from your PC and run it in a chroot, or rebuild sshd yourself. I built a new sshd and copied it to the NAS, which means we first need a matching toolchain. The toolchain used by Buffalo is:

  • GCC 4.4.6
  • glibc 2.14.1
  • crosstool-NG 1.13.2 – 01122012

Running /lib/libc.so.6 on the NAS confirms this:

bash-3.2# /lib/libc.so.6
GNU C Library stable release version 2.14.1, by Roland McGrath et al.
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 4.4.6.
Compiled on a Linux 2.6.39 system on 2012-01-13.
Available extensions:
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
BIND-8.2.3-T5B
libc ABIs: UNIQUE IFUNC
For bug reporting instructions, please see:
<http://www.gnu.org/software/libc/bugs.html>.

Here we go: I used a Docker Ubuntu 12.04 (precise) container and crosstool-NG 1.20 to build the toolchain. Here is the configuration:

$ ct-ng menuconfig

Select glibc 2.14.1

We select gcc version 4.4.6

Enable C/C++

Select companion libraries:

  • GMP 4.3.1
  • MPFR 2.4.2
  • PPL 0.10.2
  • CLooG 0.15.11
  • libelf 0.8.13

We must enable "Check the companion libraries builds" so that ct-ng runs the library checks for us. Other combinations may fail; I tried several and ended up with this configuration.

Optionally, build other tools and libraries as well, but you can skip this.
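
Once the configuration is saved, the build itself is a single command (by default crosstool-NG installs the result under ~/x-tools):

$ ct-ng build
$ ls ~/x-tools/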

The toolchain took about 15 minutes to build on my machine (which is quite beefy: 64 GB RAM and a Ryzen 3800X), but yours may be faster or slower. Here is my Dockerfile:

# NOTE: This docker image must be created with selinux disabled:
#
# sudo setenforce Permissive
# docker build -t yoctobuild/precise:2 -f Dockerfile.precise .
# sudo setenforce Enforcing
#
# Additionally, for non-root podman images, the fuse package
# must be held.


FROM ubuntu:precise
MAINTAINER Quan Cao <caoducquan@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
WORKDIR /tmp


# Need i386 arch and update repos
RUN apt-get update -qq && \
apt-get upgrade -y -qq


# Need default shell to be bash
ENV LANGUAGE en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8


RUN ln -sf /bin/bash /bin/sh


RUN apt-get -qq update \
&& apt-get -qq install locales apt-utils python-software-properties \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y locales \
&& locale-gen en_US.UTF-8 && dpkg-reconfigure locales && /usr/sbin/update-locale LANG=en_US.UTF-8 \
&& apt-get install -y -qq \
binutils \
dos2unix \
file \
libxml2-utils \
m4 \
make \
rsync \
zip \
libssl-dev \
gawk wget git-core diffstat unzip texinfo gcc-multilib \
build-essential chrpath socat cpio python python3 \
xz-utils debianutils iputils-ping libsdl1.2-dev xterm autoconf libtool automake


# i386
RUN apt-get install -y -qq \
libc6:i386 \
libstdc++6:i386 \
zlib1g:i386 \
build-essential \
perl


RUN apt-get install -y -qq bison flex libtool libncurses-dev curl wget gperf libcrypto++-dev


RUN useradd -ms /bin/bash yoctobuild
USER yoctobuild
WORKDIR /home/yoctobuild

Now you can use the toolchain to compile OpenSSH. Just build it, copy the resulting sshd to the NAS as /usr/local/sbin/sshd.strict (a rough build sketch follows the snippet below), and update /etc/init.d/sshd.sh:

SSHD_EXT=/usr/local/sbin/sshd.strict

sshd_stop()
{
        killall sshd
        killall sshd.strict   # added
}
sshd_start()
{
.....
        
        ${SSHD_EXT} -f /etc/sshd_config.nopam
}
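
For reference, the OpenSSH cross-build with the new toolchain might look roughly like this. The target triplet, paths and version are assumptions based on my setup, and zlib/OpenSSL may need to be cross-built and pointed to first:

export PATH=$HOME/x-tools/i686-nptl-linux-gnu/bin:$PATH    # assumed toolchain location
cd openssh-7.9p1                                           # the exact version is up to you
./configure --host=i686-nptl-linux-gnu \
            --prefix=/usr/local \
            --without-pam \
            --with-privsep-path=/var/empty
make
# then copy the resulting sshd to the NAS as /usr/local/sbin/sshd.strict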

Install aMule

aMule 2.3.2 is built on the PC and then run on the NAS inside a chroot jail that carries its own libraries (see the jail layout below):

$ cd aMule-2.3.2
$ mkdir build.noguid
$ cd build.noguid
$ ../configure --enable-amule-daemon --enable-amulecmd --enable-webserver --disable-amule-gui --disable-monolithic --prefix=/
$ make -j8
$ 
$ ldd src/amuled
        linux-vdso.so.1 (0x00007ffc001ff000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007efbfecee000)
        libcrypto++.so.6 => /usr/lib/x86_64-linux-gnu/libcrypto++.so.6 (0x00007efbfe772000)
        libwx_baseu_net-3.0.so.0 => /usr/lib/x86_64-linux-gnu/libwx_baseu_net-3.0.so.0 (0x00007efbfe52e000)
        libwx_baseu-3.0.so.0 => /usr/lib/x86_64-linux-gnu/libwx_baseu-3.0.so.0 (0x00007efbfe09f000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007efbfdd16000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007efbfd978000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007efbfd760000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007efbfd541000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007efbfd150000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007efbfcf4c000)
        /lib64/ld-linux-x86-64.so.2 (0x00007efbff4d7000)
 
$ ldd src/webserver/src/amuleweb
        linux-vdso.so.1 (0x00007ffc001ff000)
        libwx_baseu_net-3.0.so.0 => /usr/lib/x86_64-linux-gnu/libwx_baseu_net-3.0.so.0 (0x00007efbfefa7000)
        libwx_baseu-3.0.so.0 => /usr/lib/x86_64-linux-gnu/libwx_baseu-3.0.so.0 (0x00007efbfeb18000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007efbfe8fb000)
        libpng16.so.16 => /usr/lib/x86_64-linux-gnu/libpng16.so.16 (0x00007efbfe6c9000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007efbfe340000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007efbfdfa2000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007efbfdd8a000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007efbfdb6b000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007efbfd77a000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007efbfd576000)
        /lib64/ld-linux-x86-64.so.2 (0x00007efbff4d7000)

$ tree aMule
aMule
├── bin
│   ├── amuled
│   ├── amuled.sh
│   ├── amuleweb
│   ├── amuleweb.sh
│   ├── bash
│   └── ls
├── dev
│   └── tty
├── home
│   ├── aMule
│   │   ├── amule.conf
│   │   ├── amule.conf.bak
│   │   ├── Incoming
│   │   └── Temp
│   └── amule.conf
├── lib
│   ├── ld-linux-x86-64.so.2
│   └── x86_64-linux-gnu
│       ├── libcrypto++.so.6
│       ├── libc.so.6
│       ├── libdl.so.2
│       ├── libgcc_s.so.1
│       ├── libm.so.6
│       ├── libpng16.so.16
│       ├── libpthread.so.0
│       ├── libstdc++.so.6
│       ├── libtinfo.so.5
│       ├── libwx_baseu-3.0.so.0
│       ├── libwx_baseu_net-3.0.so.0
│       └── libz.so.1
├── lib64 -> lib
├── root
├── share
│   ├── amule
│   │   └── webserver
│   ├── applications
│   ├── doc
│   │   └── amule
│   ├── locale
├── usr
│   └── local
│       └── share -> ../../share
└── var
    └── log
        ├── amuled.log
        └── amuleweb.log
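
The jail above was populated by hand: copy the binaries in, then use ldd on the build machine to find and copy every shared library they need. A rough sketch, run from the directory containing the aMule jail (adjust the binary list and paths to taste):

mkdir -p aMule/lib/x86_64-linux-gnu
for bin in aMule/bin/amuled aMule/bin/amuleweb aMule/bin/bash aMule/bin/ls; do
    ldd "$bin" | awk '/=>/ {print $3}' | while read -r lib; do
        cp -v "$lib" aMule/lib/x86_64-linux-gnu/
    done
done
cp -v /lib64/ld-linux-x86-64.so.2 aMule/lib/    # the dynamic loader itself
ln -s lib aMule/lib64                           # gives the lib64 -> lib symlink shown above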

Create a script to start amuled. Note that starting amuled will also start amuleweb for us.

bash-3.2# cat /etc/init.d/amuled.sh 
#!/bin/sh
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin/:/usr/local/sbin/

CHROOTDIR=/mnt/array1/worknas/jail/aMule
CMD="/bin/amuled -c /home/aMule"
LOGFILE="var/log/amuled.log"
SRVNAME=amuled
KILLSIG=2

cd $CHROOTDIR

service_start()
{
    pid=`pidof ${SRVNAME}`
    echo "$pid is running for ${SRVNAME}"
    if [[ -n "$pid" ]]; then
        echo "Service is running! No need to start!"
        exit 0
    fi
    echo "Starting service ${SRVNAME} ..."
        cd $CHROOTDIR
        /usr/sbin/chroot ${CHROOTDIR} ${CMD}  > ${LOGFILE} 2>&1 &
        echo $! > /var/run/${SRVNAME}.pid
}

service_stop()
{
        #Sending SIGINT for graceful stop!
    killall -${KILLSIG} ${SRVNAME}
}


case $1 in
start)  
        service_start
        ;;
stop)   
        service_stop
        ;;
restart)
        service_stop
        sleep 10
        service_start
        ;;
reload) 
        service_stop
        sleep 10
        service_start
        ;;
*)
        echo "Unknown argument"
        ;;
esac

Then we need to run this script when the NAS starts up and shuts down. For startup, hook the script into /etc/init.d/rcS:

/etc/init.d/wol.sh start wol_ready_check

echo "****** START AMULE **************"
/bin/mount --bind /dev /mnt/array1/worknas/jail/aMule/dev
/etc/init.d/amuled.sh start
exec_sh consoled.sh

For shutdown, create a symbolic link:

$ cd /etc/rc.d/extensions.d/
$ ln -s ../../init.d/amuled.sh K20_amuled.sh

There you go: the aMule daemon will start and stop as expected. Enjoy!

Posted in Uncategorized

Kubernetes From Network Developer Perspective

Kubernetes Overview

Kubernetes is a prominent container orchestrator. In a Kubernetes cluster, the master node orchestrates a fleet of worker nodes to keep the system in the desired state. The user/developer feeds requirements into the system via manifest files (YAML format) that declare the expected state (1 load balancer, 4 replicas of the auth service, 4 replicas of the wordpress service …), and Kubernetes automates all the steps needed to drive the system towards that goal.
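
As a minimal sketch (the names and image are placeholders), a manifest asking for 4 replicas of an auth service could look like this, applied with kubectl:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 4                       # the declared, expected state
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: example.com/auth:1.0   # placeholder image
EOF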

Kubernetes Network Communication

The backbone that lets Kubernetes deliver its performance consists of:

  1. Service Discovery (Data Plane Bus)
  2. System monitoring (Control Plane Bus)

In the figure above, assume that pods of different colors belong to different services. Those services must communicate with each other over the Data Plane Bus. Keeping that communication fast and secure is what we are working on, and it brings us to another layer called the service mesh (e.g. Istio).

We can think of it as a layer on top of the OS's TCP/IP stack that makes sure service-to-service communication is reliable. Why? Pods may come and go while the system is running, so messages may get lost when a pod is stopped. Should we resend a message after the pod has recovered? How can we tell whether the communication is fast enough? We need metrics, right? We need to track all the communication times and collect detailed information to analyse issues: where is the bottleneck? A service mesh should give us all the tools to do that. As of now, Istio and Linkerd are actively developing these features, and many commercial solutions are coming as well.

Posted in Kubernetes

Raw Socket vs NetFilter

Overview

I recently had to work with the TCP/IP stack. My small project bridges all IP packets from the modem's PPTP connection to a PPPoE connection with a PC. For this, I developed a userspace bridging program that opens a raw socket to grab all IP packets and PPPoE session packets from the modem and the br-lan port. However, we also need to allow remote access to the device from the internet over the modem connection, which raised the question: will the netfilter framework filter packets before they are delivered to the raw socket?

The answer is no: a raw socket captures incoming packets before they enter the IP stack. Netfilter is part of the IP stack, so the raw socket sees all incoming packets before they are filtered by netfilter/the firewall (similar to how Wireshark or tcpdump capture packets).
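
A quick way to convince yourself of this from the shell (the interface name eth0 is a placeholder): tell netfilter to drop incoming ICMP echo requests, then watch them still arrive at the capture layer:

$ sudo iptables -A INPUT -i eth0 -p icmp --icmp-type echo-request -j DROP
$ sudo tcpdump -ni eth0 icmp
# ping this box from another host: the echo requests keep showing up in tcpdump
# even though netfilter drops them, so no echo reply is ever sent back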

Sample code

TBD

Posted in Uncategorized

Open Source IPSec

It took me many years to find the time to write a blog post. Recently, one of my friends asked me to help him resolve an IPsec issue with a Fortigate device. In fact, I do not have much experience with IPsec, so I was reluctant to help. Here I just want to write down everything I learned while investigating the issue.

Agenda

RFC standards

  • Group of standards
  • Relationship

ISAKMP

  • Packet format
  • How to parse packets
  • IKEv1 protocol and message exchange
  • IKEv2 protocol and message exchange (T.B.D)

Strongswan Implementation

  • Strongswan architecture
  • Strongswan implementation for IKEv1
  • Strongswan implementation for IKEv2
  • Talking with Linux Kernel IPsec Framework XFRM
  • How to debug

RFC standards

In general, IPsec is one of the more complicated protocol suites and has gone through many revisions. What I focused on is the site-to-site IPsec VPN. In this mode, ESP in tunnel mode is used, so I will concentrate on these standards:

  • RFC4301: Security Architecture for the Internet Protocol
  • RFC2407: The Internet IP Security Domain of Interpretation for ISAKMP
  • RFC2408: Internet Security Association and Key Management Protocol (ISAKMP)
  • RFC4306: The Internet Key Exchange v2 (obsoletes RFC2409, IKEv1). In fact, the newest version is RFC7296, published in October 2014
  • RFC4303: IP Encapsulating Security Payload (ESP)
  • RFC3948: UDP Encapsulation of IPsec ESP Packets

In general, a site-to-site IPsec VPN requires these protocols:

  • The Internet Key Exchange (version 1 or version 2), which sets up the security associations (SAs) between the parties: negotiating the security parameters (DH group), authenticating the peers (pre-shared key), automatic rekeying, …
  • A protocol that protects the packets themselves (ESP or AH)

The following picture describes how normal IP packets are encapsulated into IPsec packets.

This assumes that UDP encapsulation (RFC3948) is used, so there is a UDP header between the new IP header and the ESP header. It applies whenever either side of the IPsec tunnel is behind NAT, which is usually the case. If not, the UDP header is simply omitted.
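
Since the original picture is not reproduced here, the resulting tunnel-mode layout with UDP encapsulation is roughly the following (ESP Auth is the integrity check value):

New IP Hdr | UDP Hdr | ESP Hdr | Orig IP Hdr | Payload | ESP Trailer | ESP Auth
             (4500)            |<----------- encrypted ------------>|
                     |<---------------- authenticated ------------->|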

SOME NOTES ABOUT SA

  • We must differentiate between the IKE SA and the IPsec SA.
  • In general, the IKE SA is created by IKEv1 or IKEv2. The IPsec SAs are then created over the secure channel that IKE established.
  • Strongswan calls these IPsec SAs Child SAs; both kinds of SA can be inspected from the shell, as shown below.
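
On a Linux gateway running Strongswan, both kinds of SA can be inspected from the shell (the exact commands depend on the Strongswan version and configuration backend in use):

$ swanctl --list-sas      # IKE SAs and their Child (IPsec) SAs, via the vici interface
$ ip xfrm state           # the kernel IPsec SAs that Strongswan installed through XFRM
$ ip xfrm policy          # ...and the matching kernel policies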

ISAKMP

ISAKMP is the protocol that lets IPsec peers exchange and negotiate security parameters. It is split into two phases. In phase 1, the two ISAKMP peers establish a secure, authenticated channel to communicate over, called the ISAKMP SA. In phase 2, the IPsec SAs are negotiated and established.
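
While debugging, the whole exchange can be watched on the wire; ISAKMP/IKE uses UDP port 500, and UDP port 4500 once NAT traversal kicks in (eth0 is a placeholder):

$ sudo tcpdump -ni eth0 udp port 500 or udp port 4500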

Here is the format of an ISAKMP frame. The header has the following format:

 

Each payload is composed as follows

Phase 1

Main Mode

 

Aggressive Mode

T.B.D

Phase 2

When a security association (SA) is initially established, one side assumes the role of initiator and the other the role of the responder. Once the SA is established, both the original initiator and responder can initiate a phase 2 negotiation with the peer entity. Thus, ISAKMP SAs are bidirectional in nature.

 

TODO: What is ESP Header, ESP Auth?

TODO: How new IP Header is formed?

When UDP encapsulation is used, both IKE packets and ESP packets share UDP port 4500. They are distinguished by the first four bytes after the UDP header: IKE packets carry a non-ESP marker of 00 00 00 00, while ESP packets start with the SPI, which is never zero.

Strongswan

In general, Strongswan is written in C with OOP in mind. The public header files expose a structure containing only the public API (as function pointers). The private structure embeds the public structure, which declares the methods, and hides all the internal data members. For example, identification_t is a public interface with public methods:

struct identification_t {

    /**
     * Get the encoding of this id, to send over
     * the network.
     *
     * Result points to internal data, do not free.
     *
     * @return a chunk containing the encoded bytes
     */
    chunk_t (*get_encoding) (identification_t *this);

    /**
     * Get the type of this identification.
     *
     * @return id_type_t
     */
    id_type_t (*get_type) (identification_t *this);

    ...
};

The private implementation looks like this:

struct private_identification_t {

    /**
     * Public interface.
     */
    identification_t public;

    /**
     * Encoded representation of this ID.
     */
    chunk_t encoded;

    /**
     * Type of this ID.
     */
    id_type_t type;
};

The following diagram depicts some small parts of Strongswan.

TODO1: How the egress packets are processed

TODO2: How the ingress packets are processed

 

 

Posted in Uncategorized

AndroidManifest.xml “coreApp” attribute

I am playing with AOSP to build my own Android. What I want to do is make my own IME the only IME on the system. Everything was fine until encryption came into play.

Well, after encrypting, the system reboots and asks me for the PIN to decrypt the data partition. Strangely, I could not enter my PIN code because no soft keyboard popped up! I had to plug in a hardware keyboard via an OTG cable to get past this phase.

After a long journey, I found that at this stage the Android framework only loads a set of required applications called core apps. What makes an application a core app? There is an attribute on the <manifest> element in AndroidManifest.xml named "coreApp", and it must be set to "true" for my IME app to be loaded here. It seems the feature has been available since Honeycomb (3.0).

Posted in Uncategorized

Linux and Glibc

Shared vs static

We know that glibc is designed for dynamic linking (DSOs), because shared libraries are very space-efficient (code is shared between many programs), flexible and easy to upgrade. Those properties are great for a PC or other general-purpose environment, but not for an embedded system. An embedded system needs a small footprint and must be fast and stable, so static linking is a good choice: the system does not have to resolve symbols at run time as with dynamic linking, all code is linked together into one big binary image, the compiler can use direct jumps instead of the indirect jumps required for shared libraries, and we never face mismatched library versions.

Cross-compile glibc

Coming back to glibc: fortunately, glibc still has an option to build its add-on libraries statically. The one I care about is the NSS library (name service switch). I spent many days googling how to statically link my image while keeping glibc functions such as gethostbyname, gethostbyaddr, getaddrinfo, … working. There was no clear answer, but I got some hints. After downloading the glibc source code and running the configure tool, I found that glibc supports building the NSS modules as static libraries, so we must cross-compile a new glibc, because the binary glibc image from DENX is not built that way.

I have been too busy these days to continue this post, but now I have found the time to update it.

Back to the statically linked NSS library. Cross-compiling glibc is not a tough job, but there are a few things to take care of. glibc uses autoconf, and some checks, such as sizeof(unsigned long long), cannot be executed when cross-compiling (the test programs are built for the target), so you need to seed those values in a cache file. One more thing is TLS (thread-local storage): we must build glibc with TLS support.
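
A rough sketch of the configure step is shown below; the target triplet, glibc version and cache variable names are only illustrative, and the exact values you have to pre-seed depend on the glibc release:

mkdir glibc-build && cd glibc-build
cat > config.cache <<'EOF'
libc_cv_forced_unwind=yes      # checks like these cannot run on the build host,
libc_cv_c_cleanup=yes          # so they are seeded into the cache file up front
EOF
../glibc-2.x/configure \
    --host=powerpc-linux-gnu \
    --prefix=/usr \
    --enable-static-nss \
    --cache-file=config.cache
make && make install DESTDIR=$PWD/install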

Talking about TLS: TLS, as its name suggests, is "private" storage for each thread. It is not truly private, because other threads can still access it if they know the address. "Private" means that the runtime creates that storage for every thread, for example whenever you create a new thread by calling pthread_create(). If you do not go through pthread_create(), you lose this feature. The following code uses the __thread keyword to declare a thread-private variable:

#include <stdio.h>
#include <pthread.h>

/* each thread gets its own copy of this variable */
static __thread int g_private_val;

void *thread_handler(void *args)
{
    printf("address of g_private_val = %p\n", (void *)&g_private_val);
    return NULL;
}

int main(int argc, char *argv[])
{
    pthread_t thread1, thread2;

    pthread_create(&thread1, NULL, thread_handler, NULL);
    pthread_create(&thread2, NULL, thread_handler, NULL);

    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    return 0;
}

Looking at the result, we will notice that the address is different for each thread. Remove the __thread keyword and both threads will print the same address.

TLS is architecture-dependent. On the PowerPC architecture, this storage is allocated on the stack; GPR2 (general purpose register #2) points to the address of the thread's private data structure (per the ABI), and that data structure stores the TLS information.

I am working on a project where TLS is a good solution. My job is building the platform layer, which must provide an API for creating tasks. Like threads, tasks share the address space and the file-system namespace, but they do not share signal handlers or file descriptors. Why do tasks need these properties? If they shared signal handlers, then when one task got a SIGSEGV every other task would be killed too, and my system cannot live with that. Sharing file descriptors is also a problem, because we use many shell libraries that need access to the standard input and output files. Fortunately, I could satisfy these requirements thanks to the clone system call: we can ask the kernel to share exactly what we need (for example CLONE_VM to share the address space, CLONE_FILES to share file descriptors).

That looked good, as we could share exactly what we wanted. However, a new problem appeared when I integrated other packages such as Quagga and Net-SNMP into my system. These packages are written for real processes (which share nothing), while our tasks sit somewhere between threads and separate processes. The issue is that for real processes global variables are private to each process, which is not true for threads, and you can guess that this is where TLS comes in. With TLS, my job becomes easy: I only add the __thread keyword to the global variables and they become private to each thread. So the final solution is to use pthread_create() so that glibc builds the task context for us (based on the stack), and from within that context use the clone system call to build our own tasks. For now, I am satisfied with that solution :D.

Posted in Uncategorized