
Monday, 28 September 2015

No wonder PayPal is having its lunch eaten by Stripe...

I am currently trying to integrate PayPal Express Checkout with an online organic fruit and vegetable shop whose website I manage.

Wow! What a flaky experience. I can only hope that the PayPal production site is more stable than its sandbox!

First up, when I tried to create test accounts in the sandbox I kept getting mysterious "system errors". Only after much googling did I discover that this mysterious system error means the password for the test account does not meet the password complexity rules... (hint: Password1! meets the criteria).

Then there are the random, intermittent failures when I call the Express Checkout APIs.

And now I find that PayPal serves an nginx 404 error when I try to launch the integrated Express Checkout page. After a while it comes good...

This just smells of a half-baked test sandbox environment. No wonder upstarts like Braintree (since bought by PayPal) and Stripe are eating PayPal's lunch in the payments market.

In fact, I was thinking of using the Braintree integration, but their signup process was a bit onerous, asking me questions about our shop's delivery and returns policy (we don't have one: people pick up their groceries each week), privacy policy, etc.

Anyway, PayPal needs to step up its game as far as developers are concerned... when there are so many issues in the test environment, you have to wonder how stable the production environment is.

Tuesday, 8 September 2015

My iCloud photos library finally synced successfully - no thanks to Apple!

In my previous post I mentioned how iCloud photo sync killed my MacBook. It was stuck at 90 GB of my 180 GB for 3 months, and once I fixed the issue it uploaded the remaining 90 GB in about 10 days (on Optus cable with a 1.5 Mbit/s upload speed).

Basically, my problem was that a couple of photos and videos in my library were corrupted, and iCloud photo sync is rubbish: it keeps getting stuck on corrupted files instead of just skipping them (as rsync would). But you would never know this, as it doesn't give any meaningful feedback on how the sync is progressing.

I first observed the issue by using Activity Monitor to see what files and handles the cloudd process had open. I could see it repeatedly looping over the same set of photos, over and over again. When I tried opening each of these photos in the Finder, one of them caused Preview to crash. I deleted that file and the sync got a little bit further before getting stuck again.

I then tried repairing my Photos library by holding Command + Option while launching Photos.

This obviously did something, as it managed to upload a few more gigs after that before getting stuck again.
Then my MacBook died from the stress of running iCloud photo sync for 3½ months, so I took out the HDD, plugged it into my spare Windows box and installed OS X 10.10 in VMware to continue the sync. But after a while I would get a file I/O error and VMware would crash.

I thought it might be a problem with the physical disk, so I ran Disk Utility, and lo and behold it found a few errors and repaired them - but VMware would still crash after a while when accessing my photos library. I then decided to copy my photos library to a virtual disk file and see if I had any more luck.

When I used the Finder to drag and drop the photos library onto the virtual disk, it would copy 90 GB (suspiciously close to the amount the library had managed to upload before getting stuck) and then crash.
I then began to suspect a problem with my photos library itself, so I went to the terminal and used cp -R -v to copy it. After copying 90 GB it failed again, but this time I could see which file inside the library caused the failure. I tried opening that photo in the Finder, but it just caused the Finder to hang. So I deleted the photo, which happened to be from the same event as the corrupt photo I had deleted previously, and tried the copy again, this time using rsync -v -a instead.

rsync is much more robust: instead of failing, it skips the bad files and tells you at the end which ones failed - the iCloud developers could learn a thing or two from rsync! In this case it was a photo and a video, both from the same event as the previous photos I had deleted. I deleted these files and ran rsync again, and it finished super quickly as it had already copied all the other files in the previous run.

I set my newly copied photos library as my system photo library, then turned iCloud sync back on, and it continued to upload with no further dramas. It may have helped that the VM was running on a quad-core Q6600 processor rather than a Core 2 Duo laptop, because iCloud photo sync uses your computer's CPU to convert all your photos and videos to lower resolutions - which was the cause of the thermal stress that killed my trusty 2008 MacBook Pro.

If you are having trouble syncing your photos library to iCloud, try repairing the library by holding down Command + Option when launching Photos. If that doesn't work, try copying the library using rsync to see if there are any corrupt files causing the problem, and delete them. It also doesn't hurt to use a computer with a lot of grunt, as Apple makes it work really hard converting all your photos and videos to lower resolutions.

Good luck!

Thursday, 6 August 2015

Apple iCloud killed my trusty MacBook

Well, after a valiant three-and-a-half-month struggle to upload my 12,000 photos and 1,000 videos (180 GB of data in total) to iCloud, my trusty 2008 MacBook Pro has finally died, succumbing to the video chip glitch that a lot of machines of this generation suffered from. Apparently this was due to the switch to lead-free solder in the manufacturing process, and can often be fixed by reflowing the solder by heating the logic board in the oven at 200 degrees C for 10 minutes. But that will have to wait for another blog post...

It nearly got there: it managed to upload about 100 GB of data before it got stuck... According to my calculations, uploading 180 GB at my roughly 1 Mbit/s ADSL upload speed should have taken about 17 days running 24/7 - not over 3½ months!

The main problem is that instead of uploading everything to the cloud and doing the video and photo thumbnail generation there, Apple uses YOUR CPU time to generate two lower-resolution versions of every photo and video in your collection so they can be viewed on other devices. This pegs your CPU at 100% and puts a lot of thermal strain on your computer, not to mention making it impossible to use for anything else.

When it actually gets around to uploading the files to the cloud, it maxes out your upload bandwidth, which kills your ADSL download speed. It was so bad that two weeks into this sorry saga I decided to upgrade to cable internet.

Then, if it encounters a corrupt photo or video in your collection, it will just stall without telling you it is stuck. But you would never know, because Apple doesn't give you any useful progress indicators for the iCloud sync - the only real way to tell is that your CPU and upload bandwidth are no longer being red-lined.

I managed to figure this out by using Activity Monitor to see what files the cloudd process had open, and noticed that it kept looping over the same set of files. When I tried to open each of these files in the Finder, I found one photo that crashed Preview every time. I deleted this file, along with all its versions on other devices, and my upload progressed from being stuck at 94 GB until it got stuck again at 100 GB.

You can see all the temporary files used by cloudd when syncing by opening the Photos library file in the Finder with "Show package contents". When iCloud photo sync is enabled, there will be a subdirectory in there called private, which contains some SQLite databases used to track the file sync, plus a whole lot of directories named AAA, AAB, AAC, AAD, etc., which contain the files that cloudd seems to be uploading to the cloud in batches.

There are also two directories for the lower-resolution videos and photos used by the VideoConversionService and PhotoConversionService whilst they abuse your CPU and turn your laptop into a nice little room heater. Again, you can identify which files are being converted by selecting the VideoConversionService and PhotoConversionService in Activity Monitor and going to the open files and ports section.

Another top tip, which briefly got my upload unstuck, is to repair your photo library by launching Photos while holding down the Command and Option keys.

I thought I would be able to consolidate all my photos in the cloud from all my devices. Instead, I think I ended up deleting the same photo from my library about 20 times trying to clean up my camera roll - whether I deleted it from my phone, my computer or the iCloud web interface, it would never get deleted from my other devices.

Basically, iCloud photo sync DOES NOT WORK. I'm convinced it is all a big scam to sell iCloud storage plans that they will never have to worry about actually being used.

Basically, iCloud photo sync sucks.

Friday, 19 June 2015

Atlassian Bamboo deployment plans are half-baked...

I recently noticed that Bamboo now has deployment plans for deploying build artefacts. I had already implemented deployment as another stage in my build plan, so I thought I would try my hand at making a deployment plan.

My deployment plan is a very simple Windows script that remotely installs an MSI on a target machine.

But guess what?

Deployment plans (unlike build plans) have no means of specifying requirements for the Bamboo agents that will run them. In this case, it is a Windows script and NEEDS TO RUN ON A WINDOWS BAMBOO AGENT!

So my deployment plan runs on any available agent, including Linux agents, where it fails completely.

Deployment plans are useless if you have both Windows and Linux build agents. You can dedicate an agent to deployment, but then you can't use it to build any more.

Massive fail, Atlassian!

I found a workaround: include an MSBuild task in your deployment plan, which will force it to run only on Windows agents. For the solution file, choose any random name, and pass /help as an argument to MSBuild, which prevents it from trying to open the bogus solution file.
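For what it's worth, the task settings look something like this (field names from memory and possibly differing between Bamboo versions; Bogus.sln is just a placeholder name, the file never needs to exist):

```
Task type:       MSBuild
Solution file:   Bogus.sln    (any name will do; it is never opened)
Options:         /help        (MSBuild prints its usage text and exits
                               without touching the solution file)
```

The MSBuild capability is only registered on the Windows agents, so the deployment plan can no longer be dispatched to a Linux agent.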

Thursday, 12 February 2015

My new raspberry pi 2 model b arrived!

I really should write up what I did with my other Raspberry Pi, like I promised in my other post...

Basically, I interfaced it to the security intercom in my apartment via a relay so I could unlock the security door with my iPhone, because I had lost my security key and my strata wanted to charge me $150 for a replacement.

It has since been decommissioned as I have moved house (which meant I had to buy a new key from the strata anyway... though since the strata management had changed, it was only $75). I will write it up when I get around to it (could be a while, as I have noticed I haven't posted on this blog for a looong time!).

It is currently being repurposed as a Time Machine server for backing up my Macs. Not sure what I will do with this new baby - it's got 1 GB of RAM and a quad-core ARM processor. It would be nice to hook it up to my stereo to stream music via AirPlay or Squeezebox... though it's probably overkill for that purpose!

Thursday, 24 October 2013

Visual Studio 2013 and the arrogance of Microsoft

I have just installed the recently released Microsoft Visual Studio 2013, and the most striking thing about it is the sheer arrogance of Microsoft and its contempt for its customers.

It needs Windows 7 or greater to install, which is fair enough - even though my corporate desktop is still Windows XP (not for long, as it goes out of support in April 2014).

No, the ridiculous thing is that VS2013 requires you to install IE10 before it can be installed. That's right - you have to upgrade your browser to IE10 before you can install the IDE!

This is a complete dealbreaker for many corporate environments.

Fortunately, here's a workaround: a Windows command script that fakes the IE version information in the registry. VS2013 seems to work just fine with IE8, so the requirement appears to be a marketing-driven decision only.


REM --- Run BEFORE installing VS2013: write the registry values that a
REM --- real IE10 install sets, so the installer thinks IE10 is present.
REG ADD "HKLM\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer" /v Version /t REG_SZ /d "9.10.9200.16384" /f
REG ADD "HKLM\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer" /v svcVersion /t REG_SZ /d "10.0.9200.16384" /f
REG ADD "HKLM\SOFTWARE\Microsoft\Internet Explorer" /v Version /t REG_SZ /d "9.10.9200.16384" /f
REG ADD "HKLM\SOFTWARE\Microsoft\Internet Explorer" /v svcVersion /t REG_SZ /d "10.0.9200.16384" /f

REM --- Run AFTER installing VS2013: remove the fake IE10 markers and
REM --- restore the real IE8 version value (/f skips the delete prompt).
REG DELETE "HKLM\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer" /v svcVersion /f
REG DELETE "HKLM\SOFTWARE\Microsoft\Internet Explorer" /v svcVersion /f
REG ADD "HKLM\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer" /v Version /t REG_SZ /d "8.0.7601.17514" /f
REG ADD "HKLM\SOFTWARE\Microsoft\Internet Explorer" /v Version /t REG_SZ /d "8.0.7601.17514" /f


The other act of corporate arrogance is that they have deprecated non-Unicode MFC applications, with the lame excuse that the 64 MB of MFC MBCS libraries would bloat the 5.7 GB distribution of VS2013. You can still build MBCS applications, but the libraries need to be downloaded separately from Microsoft.

Porting a legacy app from MBCS to Unicode is a non-trivial task... especially if there is direct pointer manipulation that assumes 8-bit chars sprinkled through the code base. This would be a whole lot of pain for zero gain.

Friday, 18 October 2013

More than I ever wanted to know about Win32 calling conventions...

I just spent the last day trying to debug a weird bug - one of those "works in Debug builds but crashes in Release builds" issues in a C++ MFC application. After tweaking some compiler settings I discovered it wouldn't crash if I disabled optimizations in Release mode, but rather than do that, I thought I would dig down to find the root cause.
After much setting of breakpoints and attaching of the debugger, I found it was crashing in a call to CString::Format() - but only subsequent to a call to a function in a 3rd party DLL that was used for encryption. The origins of this DLL were long ago lost in the mists of time, so unfortunately there was no documentation.
The function was called by dynamically loading the DLL and calling ::GetProcAddress() to obtain the function pointer, which had the following signature:

typedef long ( __stdcall* LPFNDLLENCRYPT)(const char *, const char *, char *);

The __stdcall calling convention is used by most of the Win32 APIs. It can generate smaller code, as the called function performs the stack cleanup in one place.

The __cdecl calling convention instead relies on the caller to clean up the stack frame, and since functions are typically called from more than one place in the code, the additional stack cleanup code at every call site bloats the executable.

On a hunch, I changed the function signature to:

typedef long ( __cdecl* LPFNDLLENCRYPT)(const char *, const char *, char *);

and lo and behold - it fixed the bug!

It seems the DLL was using the __cdecl calling convention and expected the caller to clean up the stack frame, but since the function pointer incorrectly declared it as __stdcall, the compiler did not generate the stack cleanup code. This left the stack in a corrupted state, which caused the next call to CString::Format in the MFC DLL to fail. Presumably the Debug build was more resilient to stack corruption.

Here's a really useful CodeProject article on calling conventions, which explains them in more detail than you would ever want to know... unless you're debugging weird bugs caused by calling conventions!

I love C++!