Friday, March 16, 2012

Jim Cramer is way behind, but I'm glad he is behind the same stock we recommended weeks ago via Twitter (and which I personally bought a few days back):

http://wallstcheatsheet.com/investing/jim-cramer-tells-investors-buy-these-5-big-name-stocks-now.html/

Follow @paranumeral on Twitter and keep an eye out for (significant) positive predictions. Do some research, and if you like the company, go get it well before others crunch the numbers.

Monday, January 30, 2012

PgBench PostgreSQL 9.1 on RAID10 EBS volumes vs single plain EBS volume

Install the repository package (configures yum for PGDG repo):


wget http://yum.postgresql.org/9.1/redhat/rhel-6-x86_64/pgdg-centos91-9.1-4.noarch.rpm

rpm -Uvh pgdg-centos91-9.1-4.noarch.rpm


Install the server, client and contrib packages (the last one provides pgbench, among other tools):


yum install postgresql91-server.x86_64

yum install postgresql91.x86_64

yum install postgresql91-contrib-9.1.2-1PGDG.rhel6.x86_64


Make it come up on reboot:


chkconfig postgresql-9.1 on
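
Before creating the clusters, the two mount points need to be backed by formatted volumes: /ebs1 by a single EBS volume, and /ebs2 by a RAID10 md array assembled from four EBS volumes. A minimal sketch with mdadm follows; the device names and the plain ext4 defaults are assumptions, not necessarily what was used for the numbers below:


# single plain EBS volume (device name is an assumption)
mkfs.ext4 /dev/xvdf
mkdir -p /ebs1
mount /dev/xvdf /ebs1

# RAID10 array over four EBS volumes (device names are assumptions)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj
mkfs.ext4 /dev/md0
mkdir -p /ebs2
mount /dev/md0 /ebs2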


Create one database cluster on each of the two mount points: /ebs1 is the mount point of the single plain EBS volume and /ebs2 is the mount point of the RAID10 array built from the four plain EBS volumes (assembled as sketched above):


mkdir -p /ebs1/pgsql/data

mkdir -p /ebs2/pgsql/data


chown postgres /ebs1/pgsql/data

chown postgres /ebs2/pgsql/data


su postgres


/usr/pgsql-9.1/bin/initdb -D /ebs1/pgsql/data

/usr/pgsql-9.1/bin/initdb -D /ebs2/pgsql/data


Start the server on the plain EBS volume:


/usr/pgsql-9.1/bin/pg_ctl -D /ebs1/pgsql/data/ start


Initialize pgbench at scale factor 1000 (~15 GB of data):


/usr/pgsql-9.1/bin/pgbench -i -s 1000 postgres


Stress it for 5 minutes:


/usr/pgsql-9.1/bin/pgbench -j 4 -c 100 -M prepared -T 300


starting vacuum...end.

transaction type: TPC-B (sort of)

scaling factor: 1000

query mode: prepared

number of clients: 100

number of threads: 4

duration: 300 s

number of transactions actually processed: 69748

tps = 232.166776 (including connections establishing)

tps = 232.441306 (excluding connections establishing)


Stop the server on the plain EBS volume and start the one on the RAID10 volume:


/usr/pgsql-9.1/bin/pg_ctl -D /ebs1/pgsql/data/ stop

/usr/pgsql-9.1/bin/pg_ctl -D /ebs2/pgsql/data/ start


Initialize:


/usr/pgsql-9.1/bin/pgbench -i -s 1000 postgres


And stress for 5 min:


/usr/pgsql-9.1/bin/pgbench -j 4 -c 100 -M prepared -T 300


starting vacuum...end.

transaction type: TPC-B (sort of)

scaling factor: 1000

query mode: prepared

number of clients: 100

number of threads: 4

duration: 300 s

number of transactions actually processed: 94884

tps = 315.728824 (including connections establishing)

tps = 316.180605 (excluding connections establishing)


/usr/pgsql-9.1/bin/pg_ctl -D /ebs2/pgsql/data/ stop


And now with 1000 concurrent clients:


# in /ebs1/pgsql/data/postgresql.conf raise max_connections
# (maximum concurrent connections) from the default of 100 to 1000

nano /ebs1/pgsql/data/postgresql.conf
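
As a non-interactive alternative to editing the file with nano, the same change can be scripted; this is a sketch that assumes the file still carries the initdb default line "max_connections = 100":


sed -i 's/^max_connections = 100/max_connections = 1000/' /ebs1/pgsql/data/postgresql.conf


The same edit applies to /ebs2/pgsql/data/postgresql.conf further below.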


/usr/pgsql-9.1/bin/pg_ctl -D /ebs1/pgsql/data/ start


/usr/pgsql-9.1/bin/pgbench -j 4 -c 1000 -M prepared -T 300


starting vacuum...end.

transaction type: TPC-B (sort of)

scaling factor: 1000

query mode: prepared

number of clients: 1000

number of threads: 4

duration: 300 s

number of transactions actually processed: 32450

tps = 106.620361 (including connections establishing)

tps = 108.312732 (excluding connections establishing)


/usr/pgsql-9.1/bin/pg_ctl -D /ebs1/pgsql/data/ stop


# raise max_connections from 100 to 1000 in /ebs2/pgsql/data/postgresql.conf as well

nano /ebs2/pgsql/data/postgresql.conf


/usr/pgsql-9.1/bin/pg_ctl -D /ebs2/pgsql/data/ start


/usr/pgsql-9.1/bin/pgbench -j 4 -c 1000 -M prepared -T 300


starting vacuum...end.

transaction type: TPC-B (sort of)

scaling factor: 1000

query mode: prepared

number of clients: 1000

number of threads: 4

duration: 300 s

number of transactions actually processed: 55574

tps = 182.883653 (including connections establishing)

tps = 185.534468 (excluding connections establishing)


With roughly 36% more TPS at 100 clients and roughly 71% more at 1000 clients, the RAID10 configuration is definitely worth the effort.

Monday, January 16, 2012

CentOS 6.2 64bit AWS EC2 AMI creation step-by-step

This guy couldn't find any instructions on how to create a CentOS 6 AMI, but then fails to provide the necessary details in his own howto:
"I used PV-Grub, which boots into a mini Xen OS first, using one of the Amazon supplied kernels. It then boots the kernel inside the AMI, which happens to be the native kernel supplied with CentOS 6."
So here are my steps, in as much detail as I recorded. I may need to add a bit more explanation to some of them so that they are not followed blindly:
  1. Start an EC2 instance, preferably a Redhat/CentOS 6, as it already has the tools needed.
    • used CentOS ami-697bae00
    • used defaults for kernel (aki-8e5ea7e7), ram disk etc
  2. Create a 5GB EBS and attach it to the running instance
    • vol-xxxxxxx (note volume id)
    • /dev/sdf
  3. Partition the disk (giving it an msdos label), format and mount the partition, and create a few skeleton directories
    • cat /proc/partitions (to verify which device the volume actually shows up as; newer Xen kernels rename them)
    • parted /dev/xvdj
      • mklabel msdos
      • mkpart primary ext4 1 -1
      • set 1 boot on
      • quit
    • mkfs.ext4 /dev/xvdj1
    • mkdir /mnt/ami
    • mount /dev/xvdj1 /mnt/ami
    • mkdir -p /mnt/ami/{dev,etc,proc,sys}
  4. Create base devices for the new install
    • /sbin/MAKEDEV -v -d /mnt/ami/dev -x console
    • /sbin/MAKEDEV -v -d /mnt/ami/dev -x null
    • /sbin/MAKEDEV -v -d /mnt/ami/dev -x zero
  5. Create an fstab for the new install and mount proc and sysfs
    • cp /etc/fstab /mnt/ami/etc/fstab
    • mount -t proc proc /mnt/ami/proc
    • mount -t sysfs sysfs /mnt/ami/sys
  6. Install YUM
    • mkdir -p /mnt/ami/var/lib/rpm
    • rpm --rebuilddb --root=/mnt/ami
    • rpm --import --root=/mnt/ami http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
    • wget http://mirror.centos.org/centos/6.2/os/x86_64/Packages/centos-release-6-2.el6.centos.7.x86_64.rpm
    • rpm -i --root=/mnt/ami --nodeps centos-release-6-2.el6.centos.7.x86_64.rpm
    • yum --installroot=/mnt/ami install -y rpm-build yum
  7. mkdir /mnt/ami/root
  8. cp -v /mnt/ami/etc/skel/.??* /mnt/ami/root
  9. mount --bind /proc /mnt/ami/proc
  10. mount --bind /dev /mnt/ami/dev
  11. cp /etc/sysconfig/network /mnt/ami/etc/sysconfig/network
  12. cp -v /etc/resolv.conf /mnt/ami/etc/resolv.conf
  13. chroot /mnt/ami/ su -
  14. cp /etc/fstab /etc/mtab
  15. yum clean all
  16. yum groupinstall base
  17. yum groupinstall core
  18. exit
  19. cp -v /etc/sysconfig/network-scripts/ifcfg-eth0 /mnt/ami/etc/sysconfig/network-scripts/ifcfg-eth0
  20. Enable fetching of the assigned ssh keypair from the instance metadata at boot time (see the rc.local sketch after this list)
    • cp /etc/rc.local /mnt/ami/etc/rc.local
  21. Disable DNS checks and allow root to log in over SSH, as in the original AMI we started from (see the sshd_config sketch after this list)
    • cp /etc/ssh/sshd_config /mnt/ami/etc/ssh/sshd_config
  22. Disable SELinux in /mnt/ami/etc/selinux/config, as it interferes with Xen [?? STILL TRUE ??]
    • cp /etc/selinux/config /mnt/ami/etc/selinux/config
  23. Configure a GRUB "ghost" to satisfy PV-GRUB: menu.lst linked to grub.conf, pointing at the kernel and initramfs (see the grub.conf sketch after this list)
    • \rm -r /mnt/ami/boot/grub
    • cp -r /boot/grub /mnt/ami/boot/grub/
  24. Update initramfs
    • chroot /mnt/ami/ su -
    • cd /boot
    • mv initramfs-2.6.32-220.2.1.el6.x86_64.img orig_initramfs-2.6.32-220.2.1.el6.x86_64.img
    • mkinitrd --force initramfs-2.6.32-220.2.1.el6.x86_64.img 2.6.32-220.2.1.el6.x86_64
  25. With the AMI completed, time to sync and unmount the drive (verify unmounts via "cat /etc/mtab")
    • sync
    • umount /mnt/ami/dev
    • umount /mnt/ami/proc
    • umount /mnt/ami/sys
    • umount /mnt/ami
  26. Create a snapshot of the EBS volume
  27. Create an AMI from the snapshot (the root/block device MUST be /dev/sda, NOT the default of /dev/sda1; see the ec2-register sketch after this list)
    • kernel: aki-8e5ea7e7
    • arch: x86_64
    • root: /dev/sda
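
For step 20, the rc.local copied over typically carries a snippet along these lines, pulling the public key assigned at launch from the instance metadata service into root's authorized_keys. This is a sketch of the common approach; the exact script in the source AMI may differ:


# sketch of the key-fetching logic commonly found in EC2 rc.local files
if [ ! -f /root/.ssh/authorized_keys ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
    curl -sf http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys
fi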
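
For step 21, the relevant sshd_config settings carried over from the original AMI are roughly these (a sketch; the exact values in the source AMI may differ):


# in /mnt/ami/etc/ssh/sshd_config
UseDNS no
PermitRootLogin without-password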
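
For step 23, PV-GRUB reads /boot/grub/menu.lst (or grub.conf, with menu.lst linked to it) inside the image and chain-loads the kernel it names. A minimal sketch of what that file could contain for this install; the kernel version matches step 24, but the root= device is an assumption, since how the root partition is named inside the guest depends on the kernel's device naming:


default=0
timeout=0
title CentOS 6 (2.6.32-220.2.1.el6.x86_64)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-220.2.1.el6.x86_64 ro root=/dev/xvde1
    initrd /boot/initramfs-2.6.32-220.2.1.el6.x86_64.img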
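
For steps 26 and 27, with the ec2-api-tools the snapshot and registration can be scripted roughly as below; the same can be done from the AWS console. The ids, name and size here are placeholders or assumptions:


# snapshot the 5GB EBS volume that now holds the install (volume id is a placeholder)
ec2-create-snapshot vol-xxxxxxx -d "CentOS 6.2 x86_64 root"

# register the AMI from the snapshot; note the root device is /dev/sda, not /dev/sda1
ec2-register -n "centos-6.2-x86_64" -a x86_64 --kernel aki-8e5ea7e7 --root-device-name /dev/sda -b /dev/sda=snap-xxxxxxx:5:true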

Friday, June 17, 2011

Avoid penny stocks and those with recent splits

Currently trailing the DJIA by ~150%. The top contributors to the negative performance vs the DJIA are ORS with 100% and WEST with 45%. ORS is as wild a story as it gets (up as much as 200% earlier) and makes for good reading material. WEST, on the other hand, is due to splits (see the separate posting on splits). WEST split 1:4 on 4/14/11, which caused the unadjusted price in the weeks prior to the prediction to go from ~0.5 to ~2.0. The system sees that jump as unsupported by fundamentals and expects a sharp reversal, hence the negative prediction.

Friday, May 20, 2011

Performance running total

Added a running total of the % return over the market (see the TotDiff% column). Since the history is in reverse chronological order, the top entry shows the current total (as of yesterday's close). So paranumeral is trailing the DJIA by 3%. The largest contributor (after trimming the history to just the latest model's entries, deployed 5/12/11) is CXW with -10.6%. It turns out Pershing Square Capital dumped 3.5M shares on the 16th. Hopefully CXW will correct over time.

Thursday, May 19, 2011

Splits

Noticed the following

5/6/2011 11:44:37 PM hlf 0.6459572 14152.50 104.84 14159.80 53.29 0.05 -49.17 -49.22

and realized that it is due to Herbalife doing a 2-for-1 split on May 18th, 2011. Paranumeral fetches just the price deltas (i.e. just yesterday's) to minimize traffic, but it seems we have a problem with splits. So I'll either do the adjustment (complex) or fetch all historical prices to ensure they're adjusted for splits. In the meantime this affects total performance very directly and significantly: splits are huge changes in price! Furthermore, this issue plagues model building by adding lots of noise. So I have my work cut out for me.

Monday, May 16, 2011

Performance Tracking

Over the last few days I added performance tracking facilities to the website. One can view one's own or the system's prediction performance history (currently the latest 20). There's much to improve, but it is a good start. One thing to keep in mind is that the latest prices are from the previous business day; today's predictions are not shown for that reason.