CCL (Carnival) – the market likes the results

Cruise ship company CCL (Carnival) released its Q1 results, sending the shares up 7.2% to 3241p. It reported:

We are experiencing an ongoing improvement in underlying fundamentals based on our successful initiatives to drive demand. Our efforts to further elevate our guest experience are clearly resonating with consumers and, notably, improving the frequency and retention of our loyal guests.

CCL had a Stockopedia Momentum Rank in the 90s until recently, though it has now slipped to 87. It is possible that when the statistics are refreshed, it will be back in the 90s. It has a 6-month relative strength of 27%, which puts it at least in the top quintile, and a Piotroski score of 8.

I bought CCL as a momentum recovery play back in January, and so far it has been an OK performer. Not spectacular, but nothing like the disastrous money I have poured down the drain on miners.

Back in June 2014, Greenblatt said that he was shorting CCL, because it wasn’t cheap, and was earning a low return on capital and investing even more money; the very opposite of a “good and cheap company”, in fact. Unfortunately, taking a short position at that point would have been a disaster. The shares were at 2200p at the end of June, and are now at 3241p.

This, I believe, highlights a real problem with Magic Formula investing. Companies with high PEs and low returns on capital may (though not necessarily, of course) be at their nadir, and ripe for a recovery. Cyclical companies live in a topsy-turvy world: they tend to have low PEs and high ROEs at the peak of the cycle, and high PEs and low ROEs at the bottom. Investors who buy cyclicals “because they are cheap” often end up regretting it. As Lynch said: buying cyclicals on low PEs after several years of record earnings is a proven method for losing half your money in short order. I think Greenblatt made a bit of a schoolboy error on his CCL short.

I had noticed that CCL’s operating margins were amongst their historical lows. CCL’s Value Rank is 33, which on the face of it does not make it compellingly cheap. A combination of a mediocre value score and high momentum is exactly the kind of thing that a value investor is unlikely to touch; value investors tend to be hard-wired to resist buying momentum.

However, I reasoned that CCL appears to be in recovery, so I could expect margins to improve. That reasoning rests on both qualitative and quantitative evidence: the narrative that the company is pushing out, and the high Piotroski score of 8. The PE of 16 might then not be as expensive as first supposed, especially as the PEG ratio is 0.52 (which, with a PE of 16, implies forecast earnings growth of roughly 30%).

CCL also qualifies for Stockopedia’s Value Momentum screen. They describe it as “impatient value”: stocks with high momentum and a low PEG, rather than low-PE “value”.

I actually think there’s a secret wisdom behind that strategy, and I would like to explore Momentum investing further in my Fantasy Fund.

I think a simple strategy of buying a stock whose Momentum score is in the 90s and whose Piotroski score is at least 8, and then selling when the Momentum score dips below 80, would be quite a good one. Not foolproof, but quite good.
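Just to pin that rule down, here is a toy Haskell sketch (entirely my own illustration; the scores are plain numbers fed in by hand, not a live Stockopedia feed):

data Signal = Buy | Sell | Hold deriving Show

-- the rule above: enter on a Momentum score of 90+ with a Piotroski
-- score of at least 8; exit once the Momentum score dips below 80
signal :: Bool -> Int -> Int -> Signal
signal held momentum piotroski
  | not held && momentum >= 90 && piotroski >= 8 = Buy
  | held && momentum < 80                        = Sell
  | otherwise                                    = Hold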

I could be speaking too soon about CCL, but their latest RNS certainly seems to point towards continuing recovery. Is the recovery fully priced in? We shall see, but the positive reaction to today’s announcement is encouraging.

Happy investing to you all.


Haskell: collating two lists

I’m working on the rewrite of my accounting program from C into Haskell, and I’m tackling the mentally challenging problem of aggregating two lists together.

What I have is a list of equities and a list of equity transactions. I want to create a number of portfolio views, where each view may depend on the type of equity and the account it belongs to.

I first started off by trying to group the transactions by equity, and spent some time writing my own groupBy, only to discover that Haskell already provides it. If you group the transactions by equity, you can then perform a reduction on each group, say totalling the number of shares in that account, and then you have something useful.
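Here is a hedged sketch of that grouping-and-totalling idea (the Holding record and its field names are invented for illustration, not the real program’s types):

import Data.Function (on)
import Data.List (groupBy, sortOn)

data Holding = Holding { tkr :: String, qty :: Int } deriving Show

-- group transactions by ticker, then total the shares in each group
totals :: [Holding] -> [(String, Int)]
totals hs =
  [ (tkr (head g), sum (map qty g))
  | g <- groupBy ((==) `on` tkr) (sortOn tkr hs) ]

-- e.g. totals [Holding "CCL" 100, Holding "BP." 50, Holding "CCL" 25]
-- => [("BP.",50),("CCL",125)]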

Except I realised that, after all that work, it wasn’t what I was looking for. You may want to aggregate lists together in ways that don’t allow for a simple hierarchy. It is with this in mind that I wrote a collation function.

First, though, I wrote a function called ‘finding’:

import Data.Maybe (isNothing)

-- return the first element satisfying p, together with the rest
-- of the list in its original order
finding p lst =
  foldl f (Nothing, []) lst
  where
    f (res, acc) el =
      if isNothing res && p el
        then (Just el, acc)
        else (res, acc ++ [el])

testFinding = finding (== 10) [12, 13, 10, 14, 10, 15]
-- => (Just 10,[12,13,14,10,15])

‘finding’ is a cross between ‘find’ and ‘partition’. ‘find’ locates the first element satisfying a predicate, but discards the rest of the list, whilst ‘partition’ returns a tuple of all the matching elements and all the non-matching ones. What I wanted was a tuple of the first matching element in a list, and the remaining list.

Now that I have that convenience function, I can write my ‘collate’ function proper. It needs to be split into two special cases: one where the left list contains precisely one element, and another where it contains multiple elements. Here is the function for the one-element list:

collate p [l] rs =
  (Just l, hit) : misses
  where
    -- pair the lone left element with its match on the right, if any
    (hit, miss) = finding (p l) rs
    -- the remaining rights have nothing left to match against
    misses = map (\m -> (Nothing, Just m)) miss

I have something on the left, so I match it, if possible, with something on the right using a predicate p. There may be elements on the right left over, which, since this is the last left element, means the left list is exhausted. So I construct the remaining tuples as (Nothing, Just m) for the unmatched rights. Now, a right element that matches nothing on the left may be ignorable, or it may signify an error; the function doesn’t care. That’s for the caller to decide.

Now we need another case for collate, where the list on the left has more than one element:

collate p (l:ls) rs =
  (Just l, hit) : collate p ls misses
  where
    -- match the head; recurse on the tail with the unmatched rights
    (hit, misses) = finding (p l) rs

This is the recursive case: we pair the head of the left list with anything it matches on the right, then concatenate that with the collation of the remainder of the left list against the unmatched rights.
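One practical note if you type these in: the two equations must sit together in the source file, with the single-element case first, and even then the patterns are not exhaustive. A hedged extra equation (my addition, not part of the original program) covers the empty left list:

-- suggested base case: an empty left list leaves every remaining
-- right element unmatched
collate _ [] rs = map (\m -> (Nothing, Just m)) rs

In fact, with this base case in place, the single-element equation becomes redundant, because recursing on an empty tail produces exactly the same result.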

Here is an example call:

testCollate2  = collate (\l r -> l == r) [10, 11] [12, 13, 11]

This call collates a left list of integers [10, 11] with a right list [12, 13, 11]. The test for ‘sameness’ is simple numerical equality. In practice, the list on the left and the list on the right will be some kind of structured data, and the predicate would define some kind of relevant key matching. Here is the result of evaluating the function:

 [(Just 10,Nothing),
  (Just 11,Just 11),
  (Nothing,Just 12),
  (Nothing,Just 13)]

You see that the 10 on the left doesn’t match anything on the right. The 11 matches fine. The 12 and the 13 match nothing on the left.
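To make the structured-data case concrete, here is a sketch with invented types (my own illustration, not the accounting program’s real records), matching equities to transactions by ticker:

data Equity = Equity { ticker :: String, name :: String } deriving Show
data Txn    = Txn { txnTicker :: String, shares :: Int } deriving Show

-- pair each equity with the first transaction sharing its ticker
testCollate3 = collate (\e t -> ticker e == txnTicker t)
                       [Equity "CCL" "Carnival", Equity "BP." "BP"]
                       [Txn "CCL" 100, Txn "VOD" 50]

which gives, schematically:

[(Just (Equity "CCL" ...), Just (Txn "CCL" 100)),   -- matched by key
 (Just (Equity "BP." ...), Nothing),                -- equity with no transaction
 (Nothing, Just (Txn "VOD" 50))]                    -- transaction with no equity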

Phew.


Collatz conjecture

Whilst fooling around with Haskell, I came across the Collatz Conjecture …

The Collatz function f is defined by
f(n) = n/2 if n is even
f(n) = 3n+1 if n is odd

where n is any positive integer. The conjecture is that for any starting value n0, repeated application of f eventually reaches 1; i.e. given n0, there exists a j such that f^j(n0) = 1.
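Since I was in Haskell anyway, here is a minimal sketch of the function and a trajectory helper (my own throwaway code):

collatz :: Integer -> Integer
collatz n
  | even n    = n `div` 2
  | otherwise = 3 * n + 1

-- the iterates from n down to 1 (which assumes the conjecture holds!)
trajectory :: Integer -> [Integer]
trajectory 1 = [1]
trajectory n = n : trajectory (collatz n)

-- e.g. trajectory 6 => [6,3,10,5,16,8,4,2,1]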

I thought it would be fun to experiment with it, but I don’t think it’s worthwhile spending much time on it, as it has been unsolved for a long time. It is therefore the province of crank mathematicians, like squaring the circle.

But I thought I’d chip in my morning coffee thoughts on the problem …

It is certainly true that if f^j(n0) = 2^a for some a, then f^(j+a)(n0) = 1. It seems that all sequences must eventually hit a power of 2, although this is unproven. What I did notice is that if you have a number of the form
F1: 2^(2m) + 2^(2m-1) + … + 1 for any m,
then the next number in the sequence has the form 2^a. I haven’t proved that yet, though.

Now, given any number n, it is trivial to establish that it has the form
F2: n = 2^m0 + 2^m1 + … + 2^mj
for some j such that 0 <= m0 < m1 < … < mj.

We can then, in effect, write any number as a binary vector
F3: b = [b0, b1, …, bj]
where bi is either 0 or 1 for 0 <= i <= j.

When b0 = 0, it is true that
f(b) = [b1, …, bj]
So the interesting cases are when b0 = 1.
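To watch those bit patterns empirically, here is a quick sketch that renders each iterate in binary, reusing the trajectory function from the snippet above (showIntAtBase comes from the standard Numeric module):

import Data.Char (intToDigit)
import Numeric (showIntAtBase)

toBinary :: Integer -> String
toBinary n = showIntAtBase 2 intToDigit n ""

-- each Collatz iterate in binary, most significant bit first
binaryTrajectory :: Integer -> [String]
binaryTrajectory = map toBinary . trajectory

-- e.g. binaryTrajectory 7 => ["111","10110","1011","100010","10001",...]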

The next questions to answer would be how bit patterns propagate under the odd case, and whether there is some metric on the length of the vector that determines its “complexity”. Let me call that metric the “Hamming length”, for want of a better phrase, and denote it by H.

I’m thinking out loud here, but suppose we have a number of the form m = 2^a.
Then H(m) = 1, perhaps. In other words, although the number is large, it is effectively reducible to the number 1. Now let’s take another number of the form m = 2^a + 1.
This time H(m) = a, because you would need quite a lot of vector space to write the number. Now, m is odd. Note now that f(m) = 3(2^a + 1) + 1 = 2^(a+1) + 2^a + 4 = 4*(1 + 2^(a-2) + 2^(a-1)).
Aha. Once the factor of 4 is shifted out by two applications of the even rule, we have H(f(m)) = a-1. So the Hamming length is reduced.

*IF* we could convince ourselves that the Hamming length is always reduced (and I’m not sure that it is), then we are effectively on the home straight, because any number for which H(m) = 1 is trivially reducible.

Anyway, just some ideas, it’s prolly all nonsense.

Update 22-Mar-2015: Oh, the vanity. After further mucking around, I discovered that what I’ve said is almost entirely nonsense. The binary numbers that are generated are interesting, though. What seems to happen is that the Hamming length does get longer, but the sequence also tends to propagate 0s towards the least significant bits, which then leads to big reductions. So I think cycles can be arbitrarily long. Each application of f can make the Hamming length longer. So maybe it’s a case of working out an upper bound on the Hamming length before a sequence of repeated applications, and then seeing how that compares to the Hamming length when you get a string of 0s. Is the Hamming length necessarily shorter? I wonder what the bit pattern for a “maximal cycle” for integers less than 2^a would look like. Are they even unique? No doubt many thousands of people, real mathematicians among them, have been working on this problem, so it is unlikely in the extreme that the few methods of attack that have come to my mind haven’t been thought of before. Hey ho.


Eclipse over Aberdeen

These pictures were taken at Bridge of Don, Aberdeen, Scotland, on Friday 20 March 2015 at approximately 9.30 am, by Sylwia Mroz, and are copyright by her. Reproduced here with permission.

Sylwia is a work colleague of mine. I wanted to show pictures that are more personal than those you would see in the papers. I was worried that the clouds would spoil the photography opportunities, but if anything they’ve enhanced them; I really like the spectral glow they produce. Click on the images to enlarge.

DSC_0440

DSC_0443

DSC_0451


Haskell: use the assign operator (<-) for great victory

I was wondering how to escape Haskell’s “monad jail”. Now I’ve figured it out.

I am writing my little accounting package in Haskell. I have “pure” functions which work on regular lists and suchlike, and I have written a little parser that reads the data file. These pure functions are incompatible with IO, so I ended up stuffing lots of code into the main function.

But there’s no need to do that. The fix is below.

Suppose I have a “regular” function which does all the complicated, time-consuming processing. Let’s call that function ‘process’. For the sake of simplicity, suppose the “complicated” processing looks like this:

import Data.Char

process :: [Char] -> (Int, [Char])
process x = (length x, map toUpper x)

‘process’ takes a string and returns a tuple of its length and an upper-cased version; for example, process "hello" evaluates to (5,"HELLO"). It is just for illustrative purposes, you understand.

Now, suppose we want that string to have been read in from stdin. I could define a function:

foo = do
  inp <- getLine
  print (process inp)

I type ‘foo’, and I am prompted for some input. So I type in some input and press return. The crucial line is:

inp <- getLine

which ASSIGNS my input to inp. inp is of type String – plain ol’ regular String. That’s great, because I can now pass that to ‘process’ in the regular way.

Now, what if instead of a simple ‘getLine’, the input is a big, expensive operation? Maybe it has to read several big files. If I am working at the GHCi prompt, what I don’t want to do is:

foo = do
  input <- bigExpensiveRead
  print (process input)

because if I tweak my process a little and want to experiment, I have to keep doing that bigExpensiveRead. But fear not. What I should do is:

Prelude> input <- bigExpensiveRead

Now, if I tweak any part of my ‘process’, I can just redo the processing part without having to do the inputting part:

Prelude> process input

This will, of course, be completely obvious to seasoned, and even not-so-seasoned, Haskell programmers. But to newbies, it is one of those minor epiphanies. Previously, input and output seemed like an overwhelming obstacle. It’s probably why people new to Haskell are put off, and don’t pursue it any further.

No-one can be told what Haskell is. You have to see it for yourself.


zeromq: an easy-to-use IPC mechanism

Zeromq is a library that makes IPC as simple as reading and writing messages. No multithreading is required, as zeromq handles all the fuss of queuing and dispatching messages.

IPC (Inter-Process Communication) is something I have been interested in for a while, but I found it very difficult to write usable code. One weird idea I had was to make an ncurses server: you would open a terminal and start this ncurses server, then send commands to it, and it would move the cursor about and display the text it was told to. Anybody would be able to send it a message, so it wouldn’t be tied to one process.

It turns out that zeromq is an ideal candidate for this. I found out about it when I started digging into IPython, which uses it as a communication transport mechanism. Zeromq is easy to use, as I’ll illustrate in the example below.

I’m going to build a little server and client. The purpose of the server is to return a number: every time you call it, it gives you back the next number in the sequence.

Here’s the server:

#include <zmq.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <assert.h>

int count = 0;

int main (void)
{
  // Socket to talk to clients
  void *context = zmq_ctx_new ();
  void *responder = zmq_socket (context, ZMQ_REP);
  int rc = zmq_bind (responder, "tcp://*:5555");
  assert (rc == 0);

  while (1) {
    char buffer [20];
    zmq_recv (responder, buffer, 20, 0);
    printf ("Received Hello\n");
    sleep (1); // Do some 'work'
    sprintf(buffer, "Answer: %d\n", count);
    zmq_send (responder, buffer, 20, 0);
    count++;
  }
  return 0;
}

You compile it using a command like:
gcc -o server server.c -lzmq

You will need to install the zeromq dev library, of course. It is likely to be available on most distributions; Ubuntu users are amply covered. ‘count’ holds the sequential value, which is incremented after each response is sent. Notice how conceptually simple the code is: it’s really no more complicated than reading from or writing to a file.

Now let’s write the client:

#include <zmq.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main (void)
{
  char buffer [20];   // holds the server's reply

  printf ("Connecting to hello world server…\n");
  void *context = zmq_ctx_new ();
  void *requester = zmq_socket (context, ZMQ_REQ);
  zmq_connect (requester, "tcp://localhost:5555");

  printf ("Sending Hello \n");
  zmq_send (requester, "Hello", 5, 0);
  zmq_recv (requester, buffer, 20, 0);
  printf ("Received World %s\n", buffer);

  zmq_close (requester);
  zmq_ctx_destroy (context);
  return 0;
}

Again, this is very straightforward. You can see that we’re using port 5555 as the communication port. Compile it: gcc -o client client.c -lzmq

Start the server: ./server &

Send a message to this server by typing: ./client
You get back the response:
Connecting to hello world server…
Sending Hello
Received Hello
Received World Answer: 0

Run it again: ./client
The response this time is:
Connecting to hello world server…
Sending Hello
Received Hello
Received World Answer: 1

And so on, and so forth.

Very neat. Zeromq has bindings for Python, as you’d expect, but there are bindings for many other languages too, including Haskell, Lua, Java, Perl, and so on.
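For a flavour, here is roughly what the client looks like through the Haskell binding. This is a hedged sketch against the zeromq4-haskell package’s System.ZMQ4 module, written from memory, so check the package docs before leaning on it:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as BS
import System.ZMQ4 (Req (Req), connect, receive, send, withContext, withSocket)

main :: IO ()
main =
  withContext $ \ctx ->
    withSocket ctx Req $ \sock -> do
      connect sock "tcp://localhost:5555"
      send sock [] "Hello"      -- the same 5-byte request as the C client
      reply <- receive sock     -- blocks until the server replies
      BS.putStrLn reply

Note that the withContext/withSocket bracketing plays the role of the explicit zmq_close and zmq_ctx_destroy calls in the C version.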


Haskell: the pleasure of bondage and discipline

Like Emacs vs Vim, static versus dynamic typing is apt to spark bloody holy wars in brutal attempts to determine who has the kindest and most compassionate God.

My go-to language is Python, and one of the many things to like about it is its dynamism. If it looks like a duck, and quacks like a duck, then we can cook it like a duck. Mmm, that’s tasty duck.

I am a newbie to Haskell, so I am making a lot of faltering first steps. Whenever I get a piece of code to work, I tend to attribute it less to brilliant programming on my part, and more to the providence of Fortuna. There’s a big, unmoving, unyielding block in Haskell, and that big, unmoving, unyielding block is called the IO Monad. It seems to exist purely to frustrate programmers.

I have yet to really understand how to interpret a function’s type signature … and yet … and yet … it is dawning on me that this is one of the jewels in the crown of Haskell. You see, what is the essence of programming? Its essence is this: the transformation of one data structure into another. I have data in one form, but I want data in another form.

Those type signatures tell you what types go in, and what types come out. As you increase the specificity of the type signature, you lose generality in what the function can do, but you solidify exactly the nature of the transformation.
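A toy illustration of that trade-off (my own example, not from any particular library):

import Data.Char (toUpper)

-- fully general: parametricity leaves only one total implementation
identityish :: a -> a
identityish x = x

-- less general: it may inspect the list's shape, but never its elements
sizeOf :: [a] -> Int
sizeOf = length

-- fully specific: free to transform the characters themselves
shout :: String -> String
shout = map toUpper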

Now, even C has type signatures, and in fact requires them so that it can lay out memory correctly. So it’s not as if Haskell is the first language to come up with the idea of type signatures. However, Haskell goes way beyond mere memory layout: it completely affects the way you think about programming. Traditionally, programmers think about procedures. With Haskell, you tend to look at your job as creating type transformations and assembling them in the appropriate order. Documentation can be sparse, because the type signatures can tell you pretty much all you need to know.

What these signatures do for you is provide well-defined interfaces which are “impossible” to use incorrectly, in the sense that the compiler will refuse to let you operate on incompatible types. As one programmer noted on reddit (http://is.gd/oi4zze):

This is what I like to call “type tetris.” It’s fun, and you end up with correct programs without even having to understand what you just assembled.
