Monday, January 14, 2019

mscgen on macOS

mscgen is a handy tool for making message sequence charts. I've used it on Linux before, but using it on my macOS device required jumping through some hoops. Here's how I got it to work!

Download mscgen source package and un-tar to your favorite location

# d=~/work/mscgen; mkdir -p $d; pushd $d
# curl http://www.mcternan.me.uk/mscgen/software/mscgen-src-0.20.tar.gz | tar -xvz
# pushd mscgen-0.20 
# vi README
# ./configure
Here's where the fun begins with dependencies. On my machine, I needed to install libgd and libpng; these dependencies are reported in the "missing" file.

# vi missing

The following steps got me what was needed. I used sudo make install for both of these to simplify my life. There's always a risk of doing this wrong and messing up your machine, so please be careful. Also ensure that the download is secure (TLS/HTTPS) and that the hash matches the published one.

# d=~/work/libpng; mkdir -p $d; pushd $d
# curl -L https://download.sourceforge.net/libpng/libpng-1.6.35.tar.gz | tar -xvz
# pushd libpng-1.6.35
# ./configure
# make
# sudo make install

# d=~/work/libgd; mkdir -p $d; pushd $d
# curl -L https://github.com/libgd/libgd/releases/download/gd-2.2.5/libgd-2.2.5.tar.gz | tar -xvz
# pushd libgd-2.2.5
# ./configure --with-png=/usr/local/
# make
# sudo make install

# pushd ../../mscgen/mscgen-0.20/
# ./configure
# make

At this point, if all's gone well, there should be a macOS binary mscgen under the src directory.

# file ./src/mscgen

Create a test file and check output
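The post doesn't preserve the test file itself; a minimal test.msc (my assumption of typical contents — the entity names a and b are arbitrary) could be:

```
msc {
  a, b;

  a -> b [ label = "request" ];
  b -> a [ label = "response" ];
}
```

Any valid chart will do; this one should render a two-entity exchange.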

# ./src/mscgen -i test.msc -o test.png -T png
# open test.png


Wednesday, November 23, 2016

Getting my Lenovo desktop to enter BIOS screen deterministically

I recently bought a Lenovo IdeaCentre 300s desktop (Intel Core i5, 8GB memory, 1TB hard drive) and was having trouble getting it to dual boot. On the face of it, getting into the BIOS was supposed to be easy: the onscreen prompt just recommended hitting ENTER to disrupt the normal boot process. However, my attempts at repeatedly hitting ENTER only produced frustration: despite my hitting ENTER, the system proceeded to boot straight into Windows 10 without any change. I began to suspect the USB keyboard that came along with my Lenovo. When I tried another keyboard, I had short-lived success; it seemed to work the first time, but never again. I suspected that the front USB ports were behaving differently from the rear ones. Again, I had a few successes, but nothing consistent.

It was a while before I figured out that a "cold start" seemed to always work while a "restart" seemed to fail. This led me to look into the BIOS (a.k.a. the Unified Extensible Firmware Interface, or "UEFI", these days). Sure enough, I found a couple of entries that seemed relevant: SecureBoot and QuickBoot. Disabling both of these did the trick: hitting ENTER on the power-on screen now gets you into the BIOS every time.

Note that disabling these doesn't noticeably change boot speed, so the ability to choose where to boot from, including USB (and not just the boot partitions GRUB can handle), is worth it.

OBLIGATORY WARNING: MONKEYING AROUND WITH YOUR BIOS SETTINGS CAN LEAVE YOU WITH A VOID WARRANTY, A VOID SYSTEM, OR BOTH.

Tuesday, September 6, 2016

What "HKLM" means in various Windows help-forum solutions

There are many Windows behaviors which are governed by modifying specific system internals, e.g. programs that launch once per system startup, or every log in, or for particular users, etc. These typically require changes to the Windows registry (regedit is your frenemy).

In online fora, particular registry keys are often contracted to arcane acronyms such as "HKLM", "HKCU", etc.


Microsoft has a very good explanation for some important keys. However, they still don't directly explain what the prefix "HK" stands for.

The table below maps the informal contractions to their registry hives and supporting files.

Informal contraction   Registry hive                  Supporting files
HKCC                   HKEY_CURRENT_CONFIG            System, System.alt, System.log, System.sav
HKCU                   HKEY_CURRENT_USER              Ntuser.dat, Ntuser.dat.log
HKLM\SAM               HKEY_LOCAL_MACHINE\SAM         Sam, Sam.log, Sam.sav
HKLM\Security          HKEY_LOCAL_MACHINE\Security    Security, Security.log, Security.sav
HKLM\Software          HKEY_LOCAL_MACHINE\Software    Software, Software.log, Software.sav
HKLM\System            HKEY_LOCAL_MACHINE\System      System, System.alt, System.log, System.sav
HKU                    HKEY_USERS\.DEFAULT            Default, Default.log, Default.sav
As regards the "HK" prefix, my understanding is that it stands for "Hive Key" since these are registry keys that are backed by a "hive of files".

Sunday, January 11, 2015

goto in bash

GOTO is frowned upon, and Bash simply doesn't have one. This causes problems, particularly if you do have a need to skip to a particular section of code. A web search turned up various people recommending the use of functions instead. That still doesn't solve the problem of non-local resumption.

Closest I found was Bob Copeland's approach: http://bobcopeland.com/blog/2012/10/goto-in-bash/

It's an interesting use of sed to munge the script up to the "label" and then execute the result. Very neat trick, but I wanted something simpler within Bash itself.

My problem is pretty much the same as Bob describes on his web page: I have a script that must process things in several steps, each of which is time consuming and can fail. Re-running prior steps is prohibitive, so I need some mechanism to resume from where the script last failed (or close to it).

So here's what I came up with:



#!/bin/bash

function usage() {
  echo "$0 [step]"
}

label=$1;
if [ -z "$label" ]; then  label="step1"; fi

# do all common setup here. This stuff will be done each time the script is run

while true; do
 echo "processing step [$label]"; # to give a hint about where to restart from
 case "$label" in
  "step1")
      # add processing for step1 here
      label="step2"
      ;;
  "step2")
      # add processing for step2 here
      label="step3"
      ;;
  "step3")
     # add processing for step3 here
      label="end"
      ;;
  "end") echo done; break;;
  *)
    usage
    exit
    ;;
 esac

done

Here's how it runs:
 # /tmp/test_goto.sh 
processing step [step1]
processing step [step2]
processing step [step3]
processing step [end]
done

 # /tmp/test_goto.sh step2
processing step [step2]
processing step [step3]
processing step [end]
done


Thursday, December 18, 2014

I don't know C++ any more

C++11 seems to be a completely new beast. It also seems to be becoming more like Perl in that there's more than one way (now) to do it (TIMTOWTDI, read "tim-toady"). And this can be quite obfuscatory. Case in point:



#include <iostream>
#include <type_traits>

int main()
{
        typedef std::integral_constant<int, 2> two_t;
        two_t::type::type::type second;
        std::cout << std::boolalpha;
        std::cout << std::is_same<decltype(second), two_t>::value << std::endl;
        std::cout << std::is_same<decltype(second)::type, two_t>::value << std::endl;

        return 0;
}



The repeated ::type chain (highlighted in red in the original post) is superfluous, but completely legal per the syntax. And there are several more plays on this to write really obfuscated code.

Tuesday, September 23, 2014

Don't cross the streams...

Just re-learnt a lesson about stream extraction operators in C++. I had written a simple program that was "skipping bytes" and giving me grief. It took me longer than I would like to understand why this was happening. I should've known better :)
Here's the hexdump of the file I was trying to read: # hexdump -C data

00000000  01 34 07 02 00 2a 02 46  e9 37 66 00 00 e5 58 07  |.4...*.F.7f...X.|
00000010  02 1d 89 9c 13 07 02 1d  65 a2 80 0c 0e 06 07 02  |........e.......|
00000020  1d 65 a2 80 0c 0e 07 de  05 1e 10 00 00 00 07 de  |.e..............|

And here's the code that was trying to read it:

#include <iostream>
#include <fstream>
#include <string>
#include <cstdint>

#define STRING(x) #x

#define WRITE_TO_STREAM(os, data) do { \
    os << STRING(data) << "=>" << (data) << "," << std::endl; \
} while(0)

#define READ_FROM_STREAM(is, data) do { \
    data = 0; \
    auto before = is.tellg(); \
    for(auto s = 0; s < sizeof(data); ++s) { \
        uint8_t byte = 0; \
        is >> byte; \
        data = ((data << 8) | byte); \
    } \
    std::cout << "read " << std::dec << data << " [" << std::hex << std::showbase << data << "]" << std::endl; \
} while(0)

struct DataStruct {
    uint8_t  first    = 0;
    uint32_t second   = 0;
    uint16_t third    = 0;
    uint32_t fourth   = 0;
    uint32_t fifth    = 0;
    uint16_t sixth    = 0;
    uint32_t seventh  = 0;
    uint16_t eighth   = 0;
    uint32_t ninth    = 0;
    uint8_t  tenth    = 0;
    uint8_t  eleventh = 0;
    uint8_t  twelfth  = 0;
};

std::ostream& operator << (std::ostream& out, const DataStruct &d) {
    WRITE_TO_STREAM(out, uint16_t(d.first));
    WRITE_TO_STREAM(out, d.second);
    WRITE_TO_STREAM(out, uint16_t(d.third));
    WRITE_TO_STREAM(out, d.fourth);
    WRITE_TO_STREAM(out, d.fifth);
    WRITE_TO_STREAM(out, d.sixth);
    WRITE_TO_STREAM(out, d.seventh);
    WRITE_TO_STREAM(out, d.eighth);
    WRITE_TO_STREAM(out, d.ninth);
    WRITE_TO_STREAM(out, uint16_t(d.tenth));
    WRITE_TO_STREAM(out, uint16_t(d.eleventh));
    WRITE_TO_STREAM(out, uint16_t(d.twelfth));
    return out;
}

std::istream& operator >> (std::istream& in, DataStruct &d) {
    READ_FROM_STREAM(in, d.first);
    READ_FROM_STREAM(in, d.second);
    READ_FROM_STREAM(in, d.third);
    READ_FROM_STREAM(in, d.fourth);
    READ_FROM_STREAM(in, d.fifth);
    READ_FROM_STREAM(in, d.sixth);
    READ_FROM_STREAM(in, d.seventh);
    READ_FROM_STREAM(in, d.eighth);
    READ_FROM_STREAM(in, d.ninth);
    READ_FROM_STREAM(in, d.tenth);
    READ_FROM_STREAM(in, d.eleventh);
    READ_FROM_STREAM(in, d.twelfth);
    return in;
}

void readDataFromFile(const std::string &fileName) {
    std::ifstream infile(fileName, std::ios::binary | std::ios::in);
    if(!infile) {
        std::cerr << "can't open file " << fileName << std::endl;
        return;
    }
    DataStruct d;
    infile >> d;
    std::cout << d;
}

int main(int argc, char** argv) {
    if(argc < 2) {
        std::cerr << "Usage: " << argv[0] << " binary-file" << std::endl;
        return -1;
    }
    readDataFromFile(argv[1]);
    return 0;
}

The aim is simple enough: read the members of a struct from a given binary file, then print what was read to the console. For the most part the program works fine, but every now and then, on some files, a few bytes would be "skipped" entirely (the hexdump above is from a file where this happened). Particularly, instead of the expected:

uint16_t(d.first)=>0x1, d.second=>0x34070200, uint16_t(d.third)=>0x2a02, d.fourth=>0x46e93766, d.fifth=>0xe558, d.sixth=>0x702, d.seventh=>0x1d899c13, d.eighth=>0x702, d.ninth=>0x1d65a280,

uint16_t(d.tenth)=>0xc, uint16_t(d.eleventh)=>0xe, uint16_t(d.twelfth)=>0x6,

I got:

uint16_t(d.first)=>0x1, d.second=>0x34070200, uint16_t(d.third)=>0x2a02, d.fourth=>0x46e93766, d.fifth=>0xe558, d.sixth=>0x702, d.seventh=>0x1d899c13, d.eighth=>0x702, d.ninth=>0x1d65a280,

uint16_t(d.tenth)=>0xe, uint16_t(d.eleventh)=>0x6, uint16_t(d.twelfth)=>0x7,

Since I was using the std::ios::binary flag, this really confused me for a bit. I kept staring at it until it finally (re)dawned on me that operator>> performs formatted I/O, which by default skips whitespace characters.

At that point, the "fix" became trivial:

infile >> std::noskipws >> d;

Thursday, September 18, 2014

Common pitfall of new C++11 for-auto idiom

C++11 has brought in some developer friendly extensions. One of my most used ones is:

for(auto item : container) {
   // process item
}

this typically replaces the following clunky looking iteration pattern:

typedef std::list<int> myIntList;
myIntList theList;
// skip ... code that populates the list

//iterate over list doing some process
for(std::list<int>::const_iterator iter = theList.cbegin(); iter != theList.cend() ; ++iter) {
   doSomething(iter);
   doSomethingToObject(*iter);
}


The constant repetition of the iterator definition was just busy work that didn't add to readability or efficiency. C++11 recognized and simplified the above software pattern to:


for(auto iter = theList.begin(); iter != theList.end() ; ++iter) {
   doSomething(iter);
   doSomethingToObject(*iter);
}


alternatively, in many cases the following truncation "works" too:

for(auto item : theList) {
   doSomethingToObject(item);
}


I qualify "works" because the above is perfectly fine for very simple elements, but for larger elements and lists it can incur a significant temporary-allocation penalty because of a subtlety: item is a copy-constructed temporary of the corresponding element in theList. This can mean many extra allocation/construction and destruction/deallocation cycles. Instead, C++11 also provides:

for(auto & item : theList) {
   doSomething(item);
}


and

for(const auto & item : theList) {
   doSomething(item);
}


which still access and process each element in the container, but through a reference or a const reference, respectively. This has the advantage of avoiding the construct-alloc-destruct-dealloc penalty.

There is also a subtler danger with the by-value form when doSomething takes a non-const lvalue reference (or otherwise mutates its argument): the operation applies to the temporary copy and does not persist past the scope of the doSomething(item) statement.

Not seeing this can lead to minutes, hours, or days of consternation as your code seems to have no effect. Alternatively, bugs might be missed because everything falsely appears well.

It is now my personal preference to expect, enforce and use the idiom:

for(const auto & item : theList) {
   doSomething(item);
}


unless there's a compelling reason to use another approach instead. And even then to question before accepting.

references:
http://www.cplusplus.com/reference/list/list/cbegin/

Saturday, August 30, 2014

Zen and the art of system maintenance

I recently discovered that my linux box was woefully low on disk space. So much so that the boot partition was entirely consumed, and so was the root partition. I had separated /boot, /home and / onto different partitions. While I reserved several GiB for /home, I reserved only about 4.5 GiB for / and about 200 MiB for /boot.

And I'm still suffering because of it.

I was unable to update the system because it kept complaining that / didn't have enough space. The event that kicked me into action was that /boot ran out of space and some update message threatened imminent system failure.

I employed the services of Ubuntu's excellent disk usage analyzer. In my zeal to seek and destroy large space hoggers, I deleted several large files (more on which files these were and why they were left around, later), and several older kernels from /boot *manually*, e.g.

pushd /boot
sudo rm init*12.04-02*
sudo rm vmlinuz*12.04-02*

While this was sound in theory because I had newer kernels around, I still shot myself in the foot when I rebooted. I got the following message:

GRUB: error 15 file not found

Clearly, I had deleted something that had an entry in grub's menu.lst. Weirdly enough, none of the newer kernels which were left behind were updated in menu.lst. I think the reason /boot got consumed that rapidly was that Update manager failed to update menu.lst, but left the kernels in place.

So how to recover?

1. Download the Ubuntu Rescue Remix 12.04 ISO and burn it to DVD
2. Boot with the DVD
3. Examine all hardware in the system using lshw
4. Determine which classes of devices are present:

sudo lshw -short

5. sudo lshw -class disk

From the list presented, I saw that /dev/sda2 (ext3) matched my /boot in size and type, so I went ahead and mounted it:
6. sudo mkdir /mnt/sda2
7. sudo mount -t ext3 /dev/sda2 /mnt/sda2
8. pushd /mnt/sda2/boot/grub
9. sudo cp menu.lst menu.lst~
10. sudo vi menu.lst
11. for the very first boot entry (my default was set to 0, so changing the first would work), change the init* and vmlinuz* occurrences to the appropriate (latest) kernel images found under /mnt/sda2
12. save file and reboot
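For reference, a legacy-GRUB menu.lst entry looks roughly like the following. The kernel version, partition, and root device here are hypothetical placeholders; use whatever images actually exist on your /boot partition:

```
title   Ubuntu 12.04, kernel 3.2.0-35-generic
root    (hd0,1)
kernel  /vmlinuz-3.2.0-35-generic root=/dev/sda3 ro quiet
initrd  /initrd.img-3.2.0-35-generic
```

The kernel and initrd file names in the entry must match files that are actually present, which is exactly what my deletion broke.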

Voila! A working system. :-)
Joy to the world.


References:
lshw man page
ubuntu rescue remix 12.04

Friday, May 16, 2014

how to "recover" music files from an iPod

1. ensure iTunes auto-sync is disabled
2. attach iPod to Mac
3. run the following commands in Terminal:
defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder
4. Use Finder to browse the iPod in disc mode (it should show up in the left panel as a removable drive)
5. copy the folder iPOD/iPod_Control/Music to wherever you want (e.g. ~/Music/recovered)
6. disable "view all" from above:
defaults write com.apple.finder AppleShowAllFiles FALSE
killall Finder

reference: http://macs.about.com/od/backupsarchives/ss/ipodcopy.htm

Monday, February 10, 2014

Geodetic coordinates and ECEF transformations

I ran into an interesting problem the other day: converting locations represented in latitude, longitude and altitude to an earth centered earth fixed coordinate system (and vice-versa). Now, there are umpteen resources (in print and out in the wild web) that present the formulae for doing this, but there was one particular result that was used as the basis of these that intrigued me. I wanted to know how that value was arrived at, but didn’t find any immediate obvious explanation of it on the web. So, I derived the relation for myself. Now, knowing how to get at that relation, I don’t ever need to remember the conversion formulae themselves, since I can always get at them (from a simple web search, or through first principles).

Let the LLA representation be denoted by a triplet (λ,ɸ,h), where λ,ɸ, h, denote the longitude, latitude, and height, respectively based on the geoid. Let (x,y,z) represent the corresponding coordinates in the ECEF frame. We wish to find the transformation:




The following figure shows the situation better

P is the point we want represented in both coordinate frames. P′P is the vector from the surface of the oblate geoid to the actual point, with magnitude h. QP′ is the normal through the surface of the geoid to point P. In the texts, the magnitude of QP′ is N, and this is the subject of this page. Note the dashed ellipse representing the meridian ellipse, defined as the locus of points with the same longitude (λ).

The coordinates are easy to derive, if N is given:


where 

a is the semi-major axis and e is the eccentricity of the ellipse. If b is the semi-minor axis, then a, b and e are related by:

Consider the meridian ellipse as shown below. The two cardinal directions in this ellipse are u,z, where the u-axis is along the projection of P on the equatorial plane.

To determine N, we must first determine the intercept, Q, on the z-axis. If we can find the slope of the line QP′, then we can determine the equation of the line QP′, which will allow us to determine Q, and consequently N.

Since QP′ is normal to the ellipse (by definition), it must be perpendicular to the tangent at the point P′(λ, φ, 0). The slope of the tangent at any point on the ellipse may be computed as the first derivative of the equation of the ellipse, which is given by:




i.e. 

Given a line with slope m, the slope of a line perpendicular to it is given by -1/m, i.e. the slope of line QP’, m’, is:

Consequently, equation of line QP’ is  whence, Q is given by setting u = 0, i.e. 




N can now be easily computed: 

Since 



Now, z0 can be computed as 


Therefore, projection of QP on z-axis is 
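Most of the equation images in this post did not survive. Here is a sketch of the derivation reconstructed from the surrounding prose, in standard notation (a semi-major axis, b semi-minor axis, e eccentricity, φ geodetic latitude, λ longitude; the identity m′ = tan φ for the normal's slope is the step the lost figures presumably illustrated):

```latex
% Meridian ellipse (u along the equatorial projection of P) and eccentricity:
\frac{u^2}{a^2} + \frac{z^2}{b^2} = 1, \qquad e^2 = 1 - \frac{b^2}{a^2}

% Tangent slope at P' = (u_0, z_0), hence the normal slope m'.
% The normal makes the geodetic latitude \varphi with the equatorial plane:
\left.\frac{dz}{du}\right|_{P'} = -\frac{b^2}{a^2}\,\frac{u_0}{z_0},
\qquad
m' = \frac{a^2}{b^2}\,\frac{z_0}{u_0} = \tan\varphi
\;\Rightarrow\;
z_0 = (1 - e^2)\, u_0 \tan\varphi

% Substituting z_0 back into the ellipse equation:
u_0 = \frac{a\cos\varphi}{\sqrt{1 - e^2\sin^2\varphi}} = N\cos\varphi
\;\Rightarrow\;
N = \frac{a}{\sqrt{1 - e^2\sin^2\varphi}}

% Intercept Q on the z-axis (set u = 0 on the line through P' with slope m'):
z_Q = z_0 - m' u_0 = -e^2 N \sin\varphi

% The ECEF coordinates then follow:
x = (N + h)\cos\varphi\cos\lambda, \quad
y = (N + h)\cos\varphi\sin\lambda, \quad
z = \bigl(N(1 - e^2) + h\bigr)\sin\varphi
```

Each quantity here corresponds to one named in the prose above (N, Q, z0, the tangent and normal slopes), so the chain can be checked step by step against the text.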








Wednesday, January 8, 2014

C++ name demangling

C++ meta-programming is great, except when you need to debug compiler errors related to templatized code.

E.g., I got the following error:

duplicate symbol __ZlsRNSt3__113basic_ostreamIcNS_11char_traitsIcEEEERKN7myclass7myfunctE in file.o

What does this gobbledegook mean?

c++filt to the rescue!

c++filt __ZlsRNSt3__113basic_ostreamIcNS_11char_traitsIcEEEERKN7myclass7myfunctE

operator<<(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, myclass::myfunct const&)

so, I have a stream insertion operator that's multiply defined! On to the next clue, Watson!

Thursday, January 2, 2014

Woes of dual-booting and DHCP NACK on Linksys E2500

I recently upgraded my main router to the Linksys E2500. Overall, I've been quite pleased with the router's performance, and while I haven't logged enough time yet to observe all its behavior quirks, I did notice something strange on my dual boot machine.

My Linux server dual boots between Linux (primary) and Windows XP. Each time I switch from Windows to Linux, I find that the machine doesn't have any network connectivity. Initially I suspected my client-bridge, but it looks like it isn't the client bridge but DHCP (I still haven't figured out whether it's the DHCP server in the Linksys E2500, the DHCP client implementation in Linux, the DHCP client in Windows XP, or some packet-munging bug in the client-bridge).

Each time I boot into Linux, the network interface is brought up during boot (per the instructions in /etc/network/interfaces). Usually this works just fine, but IF I had been running Windows XP prior to booting into Linux, the DHCP server on the Linksys sends a DHCP NACK, causing the DORA (Discover / Offer / Request / Ack) exchange to kick in. Funnily, after the first NACK, the DHCP server stays silent to all subsequent requests, resulting in loss of connectivity on the Linux box. Windows XP doesn't seem to suffer from this if I reboot into Windows.

Once the workaround below is implemented, subsequent DHCP requests from Linux keep working till I boot into Windows and back to Linux.

Work arounds:

a) setup Linux network interface manually
b) delete the DHCP address entry for the Linux tower from the router's DHCP clients table
c) run dhclient on the Linux tower to renew DHCP lease

Setting up the Linux interface manually

sudo ifconfig eth0 192.168.1.101 netmask 255.255.255.0 up
echo "nameserver 75.75.75.75" | sudo tee -a /etc/resolv.conf
sudo dhclient -v eth0

the -v flag causes dhclient to print verbose debug information to the screen during the DHCP discovery / request process.

Deleting the DHCP address entry is accomplished as below:
a) log into the router
b) Status | DHCP clients table
c) delete DHCP entry
d) save settings

running dhclient
sudo dhclient -v eth0

confirm that /etc/resolv.conf has been updated

Further research to determine if the issue is in the router or windows, or linux or the dd-wrt client-bridge:

a) run wireshark to examine the DHCP request packet from Windows and see if it differs from the one Linux sends. E.g. if the netmask is different but the requested address is the same, the DHCP server might think the address was assigned to another client
b) connect the Linux tower directly to the Linksys E2500
c) switch out the client-bridge for another
d) check with other dual-boot systems if the same thing is happening

background reading:
http://support.microsoft.com/kb/169289
http://support.microsoft.com/kb/167014




Thursday, December 5, 2013

Scale your expectations: a humble request from a mobile user to web designers to not restrict user scalability

I browse the web. 

Extensively. 

Both while I'm at work (for work purposes, of course), and when not (which means I'm researching something or reading up on what I should be working on next). In this endeavor, I employ any web capable device I can lay my paws on, be it a desktop (Linux, Windows, Mac), a laptop (MacBook Pro, Macbook Air, Ideapad Yoga, Satellite), a tablet (iPad Air, iPad 2, iPad Mini with Retina display), a smart phone (iPhone), or an e-reader (Kindle e-Ink, word!). And if I'm really, really, really desperate, through the embedded web browser on my TV. 

All of which means that I experience the web in about as many ways as one possibly can (sorry, Lynx hasn't gotten any love in a long, long, long time), and often have to contend with websites not necessarily designed with every browser in mind. I'm normally quite considerate of the web-site designer, especially if I'm using a device that probably isn't very widely adopted (e.g. the Kindle e-ink or TV browser); that is a special situation created by my choice of non-popular device, and so the onus of consumption is on me. On the other hand, quite often my medium of consumption is an iPad or iPhone, and I'll land on a website that is "mobile friendly" and yet is infuriatingly ill fit to be browsed on a mobile screen. These are the ones I take strong exception to.

While mobile device capabilities have increased astronomically since the days of the hand-held monochrome LCD "video" games, mobile devices still often remain constrained in their screen real-estate (some necessarily due to their functionality: I can't imagine favoring a device the size of iPad Air as my phone for any length of time over a better sized hand-held). There is a fair number of websites that are still not "mobile friendly", i.e. which were designed with the facilities of a desktop browser and desktop sized screen in mind, and consequently browsing these on a hand-held requires constant panning and zooming. I don't take exception to these.

I don't begrudge these web sites and authors because the non-mobile websites were probably created before the mobile phenomenon, and haven't been updated since (either due to passivity of the original authors, or the lack of funds etc.).

The websites of late where I do take exception, are the ones that profess to be mobile "friendly", and in their efforts to be mobile compatible, go a little too far. I'm talking about the websites that recognize the mobile browser and prevent viewport scaling. What this means is that the user is UNABLE to pinch-zoom and pan to read the material.

I find this decision by web-designers increasingly frustrating. A humble and sincere request to all web-designers and web-masters of the world: PLEASE, PLEASE PLEASE, (emphasis and stress to indicate pleading and not yelling), do NOT limit the scaling, unless there is very good justification for why you need to solidly fix the web-site layout. If the text size isn't big enough, it makes for an incredibly infuriating experience for the reader. 

Just as lawmakers cannot know all the ramifications of their regulations on society at large, web designers cannot ascertain how their users would prefer to consume their material. Unnecessary and ill-designed restrictions only lead to frustration in the populace.

I understand that not all content creators might be aware that their template has this restriction built in. So here's a tip on what "causes" this particular frustration, and how to resolve it.

Website scaling is typically restricted with a meta content tag as below:

meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"


notice the maximum-scale and user-scalable fields. 

If a web site does not have a very strong functional need to stay fixed, please consider dropping the maximum-scale property and setting the user-scalable to yes (user-scalable=yes). 
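A scaling-friendly variant of the tag (a sketch — keep whatever width and initial-scale your layout actually needs) would be:

```html
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">
```

With maximum-scale dropped and user-scalable left on, pinch-zoom works again while the initial layout is unchanged.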

Your mobile users will thank you. I certainly will. 


Friday, October 18, 2013

Smash the Dashboard

Microsoft introduced gadgets (widgets) in Vista. These were tiny little programs that were supposed to give you instant access to information: minimize your windows and you could see your information right away. These digital tchotchkes were a bad idea: they hung around in your machine all the time, draining CPU cycles and resources while providing very little information. Also, it took some doing to get rid of them.

Truth be told, these seem to be borrowed from Mac OS's Dashboard. Mac OS's Dashboard widgets have the same idea: they sit generally unseen on the Dashboard and present themselves when you hit CTRL + LEFT ARROW from your desktop. They were meant to be cute and impressive. Tchotchkes nonetheless. And just as useless, and just as terrible an idea.

I usually find the Dashboard to be more a nuisance than an aid. CTRL + LEFT ARROW when editing a document, and boom, paradigm shift to a screen that serves very little purpose other than to flummox and annoy. No thanks!

So here's how you disable it in Mac OS X:

defaults write com.apple.dashboard mcx-disabled -boolean YES
killall Dock


source: http://www.macworld.com/article/1046236/disabledashboard.html

to turn it back on (why? Seriously, why would you?), do
defaults write com.apple.dashboard mcx-disabled -boolean NO
killall Dock


Hasta la vista, Dashboard!

Wednesday, September 11, 2013

Here goes whoopsie!

Mucking around in my system, I noticed a file  /var/lock/whoopsie. Since I had no idea what this was and where this came from, I was naturally suspicious. A couple of web searches answered the question for me. Apparently, this is Ubuntu's take on Dr. Watson, or crash-reporting.

I wanted it turned off, and here's how I did it:

pushd /etc/default/
sudo cp whoopsie whoopsie.backup
sudo vi whoopsie

change report_crashes to false, i.e.

diff whoopsie.backup whoopsie
3c3
< report_crashes=true
---
> report_crashes=false

sudo stop whoopsie

references:
http://askubuntu.com/questions/135540/what-is-the-whoopsie-process-and-how-can-i-remove-it

Wednesday, August 28, 2013

Getting the prompt to look right in MacOS X

New Macbook Air. Yessir! The Lenovo went back, and instead came along a Macbook Air, core i7, with 4GB of RAM and 512GB SSD sweetness. The battery life is amazing. Other things still need to be worked out, like you know, the immediate lack of Excel (and other Office products). Numbers is able to open and work with some basic Excel workbooks, but none of the shortcuts are around (obviously). And Pages is no Word (ya think?).

Also, one of the first things I did was get iTerm2 on there. I prefer it to the native Terminal application. E.g. it allows you to draw a window border so that you can see where overlapping terminals are. Plus I like the fact that I don't have to futz with the background (it comes with a dark background and white text by default).

And since I got terminal going, I need to colorize the bash prompt (naturally!). Here's a useful bash snippet.


cat ~/.bash_profile
PS1='\[\e[1;32m\]\u\[\e[m\]@\[\e[0;31m\]\h\[\e[m\]:\[\e[2;33m\]\w\[\e[m\] # \n\$ '
# PS1='\u@\h:\w # \n\$ '
alias ls='ls -G'
alias dir='ls'
alias grep='grep --color'
alias rebash='source ~/.bash_profile'

resultant output looks like below:

user@machine:/your/path #
$


The prompt colorization is in the first line of the snippet, which sets the shell variable PS1. Colors are specified as ANSI escape sequences of the form \e[x;ym <stuff you want to colorize> \e[m.
The trailing \e[m resets the attributes back to the default. Think of the pair as "quotes" around the colored text.
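The same escape sequences can be sanity-checked outside the prompt with plain printf (a quick sketch of my own; portable printf wants the octal form \033 rather than bash's \e):

```shell
# print the word "green" in bold green (1;32), then reset terminal attributes
printf '\033[1;32m%s\033[0m\n' green
```

The word still reads "green" when piped through grep or a pager; only a terminal renders the color.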

More references here:
http://en.wikipedia.org/wiki/ANSI_escape_code
http://en.wikipedia.org/wiki/Control_character
http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html


Thursday, August 15, 2013

Living with Windows 8 and Lenovo Yoga 13, part 2

I just returned the device back to the Microsoft Store. I was quite disappointed because I had really wanted to like and use the machine.

I really liked:

  1. that you could stand it up so that you could watch media handsfree, e.g. when cooking
  2. the screen resolution. The 1600x900 display was crisp and made images pop. Bing home page, particularly, looked awesome each day
  3. how easily it integrated with my existing printer / scanner. No fuss whatsoever.
  4. UPnP discovery and display capability. It discovered my SonyTV right off the bat!
  5. the general snappiness of Windows 8
  6. Office 2013 ran like a charm


What bothered me:

  1. to log in to Windows, you need Wi-Fi. Seriously. During setup, Windows 8 asks for a Windows / Microsoft email id to be created / used. I did use one, and then I created a couple of other non-admin (local) users as well. However, without a network connection, it would only allow the last-used account to log in. If that happened to be the admin account, then every other local user is essentially locked out, EVEN THOUGH THEY HAVE LOG-IN PRIVILEGES to their own local account. I found this product decision baffling.
  2. undiscoverable gestures. I wrote about this the last time.
  3. the on-screen keyboard is flaky, particularly with Metro IE. Multiple account users were unable to type in Facebook. Not being able to post to Facebook in the tablet, stand, or tent modes was a huge downer; this is not acceptable for a $1000 machine. There were several posts about it online as well. The main issue seemed to be updates; I did update mine several times to no avail. http://answers.microsoft.com/en-us/windows/forum/windows_8-tms/windows-8-touch-keyboard-keeps-disappearing-as/8449af91-84da-4013-9d6d-cbc97b3a7aec.
  4. high contrast mode was iffy. Kudos to Microsoft for trying to make high-contrast mode work intelligently. I typically use this mode at night to reduce eye-strain, since it changes the typical bright white background to black and makes the text white. This reduces the total light intensity hitting your retinas, so your pupils don't have to compensate by tightening, thus easing the strain on the eyes. iOS devices achieve this with the chainsaw technique of simply inverting all colors. It's not perfect (the pictures are ghostly and one needs to switch back and forth between the modes), but it works well. Microsoft's approach was to intelligently change the color scheme for each app. This had the pleasant surprise of making pictures look right while inverting the text. It was a beautiful experience. However, it just didn't work right, because on some websites entire swaths of information would be missing. Particularly on financial websites, important charts and text sections would just be absent in the high-contrast mode. I couldn't figure out why. With the iOS approach, yes, you still had the inconvenience of having to switch back and forth, but you knew that a section of the page had information that might interest you, and it gave you (perhaps only coincidentally) the choice of reverting to see the data in the original light. It also made the switch easy by binding the action to a simple gesture (triple click of the home button). No such luck in Windows 8. I really, really would prefer the Windows 8 inverted mode experience if they fixed the missing content issue, since it obviates the need to deal with ghostly pictures (when they are visible).
  5. pages would randomly increase or decrease in size. This happened not just in tablet mode (possibly due to inadvertent dual point touches on the screen, though one could argue that the software should intelligently detect and compensate for these), but also in laptop mode. Even with palm detection on the touchpad turned to the max.
  6. a side effect of 5) above was that during editing, the mouse cursor would jump to an unrelated area of the text, munging your edits. This was quite disturbing, especially when using Excel, since the result wasn't limited to easily identifiable spelling errors; unintended numerical typos have a tendency to cascade through computations without being easily identified.
  7. constrained disk space. The device came with 128GB of disk space. True, I could've coughed up ~$400 more for an i7 version with 256GB of space. But the fact remains that out of 128GB, with barely the OS, Office and anti-virus installed, I only had about 90GB left. That might be OK for most folks, but given the volume of data generated in my household (photos, software, documents), it seemed a little constraining.


On the plus side, I really liked the Microsoft Store experience. They took back the device with nary a fuss. They did ask me what was wrong with it and offered to help address the issue, but when they saw it, they took it back. They also offered the Microsoft Store bag back to me. Makes a good recyclable grocery store bag!

That said, I might go back to the Windows 8 / Yoga combination again. I do believe it is by far the best combination out there. It's just that it needs a little more baking. Hope MS / Lenovo reps scour the interwebs, find this page (and many others like it) and make the needed changes. It would then be a solid offering!



Thursday, August 1, 2013

Living with Windows 8 and Lenovo Ideapad Yoga: Part 1

About two weeks ago, I killed my faithful Toshiba M205-S4806. Rest in peace my friend, you worked hard for 5 years and deserve the rest (and died only because I fried your motherboard by performing "open heart surgery" on you on the carpet. Mea maxima culpa). Well, that brought me into the market for a new machine. And this time, I had a choice: go for a MacBook Air, or stay in the Windows camp with one of its competitors: an Ultrabook.

Truth be told, I liked the Toshiba a lot. Even though it ran Vista. I know Vista was panned by the larger audience while Windows 7 was loved. I personally never had too much trouble using Vista (including the UAC! I actually enforced it in my home by creating separate Administrator and "normal" user accounts, to good effect). And I never found too much of a difference between Vista and Windows 7 (to me they were practically the same OS, with different marketing and perceptions. Vide: Project Mojave). Anyhow, I liked the Toshiba because it was a solidly built machine, with a no-nonsense touchpad and a well laid out keyboard. I particularly liked the keyboard layout because the keys were nice and big, had enough travel for tactile feedback, yet were quiet enough to type in bed without waking up the one sleeping next to you. It also had a neat column of home, page up, page down and end keys, making it easy to find these and control the cursor without the use of the mouse. I also liked how the touchpad was arranged so that it didn't accidentally move the cursor when your palm brushed against it (actually, the layout entirely prevented that possibility).

Compare that with the build of the latest crop of Windows Ultrabooks: they all look like Mac wannabes, with large touchpads and baffling keyboards (particularly the models from Asus, Acer, Samsung and Sony). I wouldn't mind the large touchpads personally, but whatever their arrangement, they go berserk at the slightest brush of your palm on the touchpad. Mac machines don't seem to suffer from this at all!

For the record, I find the Mac keyboard weird because of its refusal to provide the quick navigation keys (home, page up, page down and end). Some Windows machines (Acer, Sony) take another baffling approach in their attempts to ape the Mac: requiring you to press the function key in combination with miniaturized arrow keys to move the cursor around. This is "non-standard" at best. I realize there never really was any standard on keyboard layouts, but the de facto layout on Windows (and Linux) keyboards was radically different, and I'm sure a large majority of PC users are unfamiliar with this new layout.

I'm not sure where the impetus to mindlessly ape a competitor's product comes from, but I do know that it comes at the expense of familiarity and, consequently, productivity. And productivity has been the strong suit of Windows machines against the Mac for the longest time.

One might ask why I'm drawn to Ultrabooks. Why not focus on one of the many cheaper models for a functional, value-oriented laptop?

Honestly, if the lower priced models were any good, I'd jump at the opportunity. However, from what I've seen in nearly a month of online and in-store browsing, most of the cheaper models tend to sport 15" or 17" screens (are they overcompensating for something?). These are too large and unwieldy for my liking, particularly because they choose to mess up the keyboard by including an additional number pad. I understand that might be a plus for some, but it doesn't work for me in a laptop form factor. I have my desktop for that, thank you.

It seems the Windows market is facing tectonic shifts of sorts. There already was a secular shift towards mobile computing (laptops and netbooks gaining market share at the expense of the aging desktop). However, there also seems to be a schism in the laptop market now: one set seems to focus on the ultra price-conscious but non-tech-savvy, low margin, high volume segment, while another seems to be going after the high price, feature rich, higher margin market.

The low end of the market is ostensibly threatened by the tablet form factor. However, there is still a need for a productivity option in this segment. Yet, due to the tablet threat, if volumes are to remain the same, there must be severe pressure on the margins (and manifestly, there is).

The high end is competing head-on with the likes of the Mac Air. While productivity focused individuals will always be drawn to a Windows machine ("Excel just doesn't feel the same!"), there is a strong desire in this segment to not be one-upped by a Mac in appearance. Hence, the germination of products like Toshiba's Kira lineup. This means that the middle ground of value conscious, feature rich, productivity focused laptops seems to be drying up (much like the middle class?).

Kudos to the Lenovo Ideapad Yoga for bringing a useful set of features in a very usable package: the keyboard is sane (island style keys with ample travel, no backlight but that's OK; a neat column of home, page up, page down and end keys), a nice 1600x900 touch screen, and an impressive black overall color scheme. My specimen sports a snappy 128GB SSD, but no DVD/CD drive. There is also an SD card reader, an HDMI port and two USB ports (the one on the left is a charge-when-closed USB 3.0 port).

But the coup de grace is clearly the double hinge that allows it to go from laptop to tablet in a flash! I went ahead and spent another $30 getting myself the Lenovo keyboard sleeve that protects the keyboard in the non-laptop modes ("tent", "stand", and "tablet").

All this, and I haven't even gotten to using the OS itself!

Windows 8 has oodles of potential! The start screen is beautiful, the icons are eye-catching, the Metro apps are absolutely gorgeous to look at. And on the 1600x900 screen of the Lenovo Ideapad Yoga, they really, really come alive. I love the look and feel of the Xbox music app, especially when it's playing songs with artist pictures: the show of those images is outright captivating! The Yoga is by far the most innovative Ultrabook I've come across: it allows you to go from laptop to tablet in seconds. It is guaranteed to ace the "amaze the wife" test!

And yet, there are instances where a power user such as myself is left scratching his head.


Initial interactions with the device and OS bring several questions to mind:
"What are all the gestures that work with this device?" And, "where are the hot target areas, you know like the 'Charms bar' that has quick access to search, the 'Start' menu and settings?"

None of these are easily answered by self-discovery.

There is a quick tutorial when you setup your machine, but it only talks about bringing up the Charms bar, and that too only the one time during setup. There are no subsequent visual cues to prompt the user to look at a particular edge of the screen.

Following is a list of the actions I've discovered so far. I'm sure there are pages and pages out there written about this, but I'm listing these to show that in a week's worth of owning this device this is all I've discovered so far. And mind you, I'm not a tech ingénue.

NOTE: "Swipe" below means "contact the screen with one finger and, while maintaining contact, move the finger" in a particular direction. Where more than one digit is required, the number of fingers will be mentioned (e.g. "two finger scroll").

  • Summoning the Charms bar: starting at the right edge of the screen, swipe right to left
  • Clock and battery indicator: when the Charms bar is summoned, this appears on the lower left corner of the screen
  • Switching between open applications: from the left edge of the screen, slowly swipe left to right to bring in the "next" application. I'll explain "next" shortly.
  • Task list or list of open apps: starting from the left edge of the screen, slowly swipe left to right to start bringing in the next application, but instead of going towards the middle of the screen as above, reverse going right to left back to where you started from. When you reach the right edge, a column appears with all open apps. You'll also find the "Start" screen at the bottom of this column.
  • Screen splitting: starting from the left edge of the screen, slowly swipe left to right to start bringing in the next application. Instead of letting go, hold it and a separator appears. Initially, it snaps to the left third of the screen. If you let go, that's what your app will occupy. If you drag the app all the way to the right, then it'll instead occupy the right third of the screen. Once the screen is split, the separator sports a "handle" (looks like a vertical ellipsis) that can be dragged to switch the ratio of screen real-estate between the two apps. Screen splitting is limited to these two orientations: left third, or right third. Also note that this is different from the Windows 7 "Desktop" screen tiling where you can snap one application to half the screen by dragging it to the top of the left or right half of the screen. You can still do that with "Desktop" apps in the "Desktop"
  •  (At this point, if you're asking what's the "Desktop"? Well, the "Desktop" is what one normally sees when one logs into Windows 7. "Metro" is the new Windows 8 colorful live-tile "Start screen".)
  • App context menus: short swipe from the bottom of the screen towards the center; alternatively, short swipe from the top of the screen to the middle
  • Killing a Metro app: starting at the top edge of the screen, swipe towards the middle of the screen and keep touching the screen. The app window will shrink and float with your finger. Move this to the bottom edge of the screen. The window disappears as the app is closed. This was by far the most non-intuitive action, one that I discovered only through sheer serendipity.
  • Pinch zoom: this action is most used in Internet Explorer, and one that is most natural and familiar to users, especially coming from a smart phone.
  • Page back/ forward: Swipe right / left. This is most used in the Metro Internet Explorer app. Also to be found in book reading apps such as the Amazon Kindle app
  • One finger scroll: Swipe up or down using one finger on the screen to move a list up or down. Anyone who's used the current crop of smartphones should find this very familiar.
  • Two finger scroll: Using two fingers, swipe vertically on the touchpad (not the screen) to move the page. The direction and sensitivity of scroll can be controlled by the Synaptics TouchPad driver. This is probably most familiar to Mac users

Agreed, some of the above gestures are quite natural, and especially so to a generation familiar with modern smartphones. However, some gestures, such as the one to close a Metro app, are rather difficult to discover and execute without any visual cues or hints.

I'm inclined to think that many of these gestures would fail the "wife test" or the "grandmother test". Contrast this with the iPad / iPhone gesture set which not only excel at the "wife/grandmother" test, but epitomize the "toddler test", and actually serve to democratize computing. Windows 8 seems to score a D- here instead, despite oodles of potential; like an autistic or dyslexic math-genius struggling with spelling.

Part 1 conclusion:
Windows 8 looks beautiful, Lenovo Ideapad Yoga is by far the most promising, feature rich, value oriented convertible Ultrabook in the market now. The combination still needs some more work: Windows 8 with its schizophrenic, dissociative identity disorder, and the Lenovo hardware with its touch pad and keyboard quirks.

Part 2 will be about using productivity apps such as Microsoft Office (Word, Excel, PowerPoint, Outlook) on this device running Windows 8.

 

Saturday, June 1, 2013

Firefox, Gmail, and the flash plug-in

My computer seemed slow. Everything was sluggish even though the "only thing" I was using was the Firefox browser. I did have several tabs open, but none that I knew to be media or script intensive. Yet, there was slowness. A quick glance at the task manager revealed that FlashPlayer plugin was consuming an entire core of my processor, which surprised me, since nothing should've been playing any videos.

So I killed it.

Processor seemed to go idle, things became a little snappier. Nothing seemed to have been affected. How is that possible? Was there a virus or trojan of some sort, I wondered. I got my answer when I tabbed back to my open gmail page. There, on the top, like an epiphany was displayed a bar across the tab: "Flash player plugin has crashed. Click here to relaunch page".

"What?"

Slowly, the wheels of thought groaned into motion: what is it on gmail's page that *might* use flash? Video / web-calling and Chatting seemed possible suspects. So I checked gmail's settings to see what was on, and if anything was using flash. Sure enough, clicking on chat settings showed me the following option:

Sounds:
- Play a sound notification when new chat messages arrive. Requires Flash.

I changed this to "Sounds off".

Recap: it was Flash player, in the Gmail tab, with the inappropriate chat settings. Clue!


Wednesday, January 16, 2013

Constructor parameters can have the same name as member variables

This just blew my mind today:


#include <iostream>

struct Point {
    Point(double x = 0, double y = 0) : x(x), y(y) { }
    double x, y;
    // No destructor needed
};

int main()
{
   Point p(10.0, 5.6);
   std::cout << p.x << " " << p.y << std::endl;
   return 0;
}


Came across the struct definition above when browsing the following link:

http://www.drdobbs.com/cpp/teaching-c-badly-introduce-constructors/229500116

As for the article itself, I'm completely comfortable creating structs with public data members. I try to avoid constructors for these if I intend them to be passed around as message payloads. Defining a constructor destroys the POD-ness of the data structure (it is no longer an aggregate), which means, among other things, that it can no longer be initialized with C-style aggregate initialization.

It also follows that if a structure is going to be used as a message payload, it should not contain pointers or references to objects outside the structure itself, since those can go away, leaving dangling references.

Keeping message payloads POD also has the advantage that they can be passed to C functions / handlers / libraries (though one has to watch out for packing differences there).

If the requirements necessitate a deviation from these (a need for a reference or elaborate allocation), then it is necessary to provide a destructor, a copy constructor, a copy assignment operator, and the r-value (move) versions of the latter two (the "rule of 5").

Friday, November 30, 2012

Static member functions and const-ness

A colleague of mine was a little miffed with C++ a few days ago. She was trying to create a class with a static member function that performed a simple translation. Since it wasn't modifying any class data (static or member) and was doing a pure translation, she defined it as below:



#include <string>

#include <cstdint> //need c++0x

class SomeClassWithStaticAndPrivateData {
public:
  enum EnumInt {
    Unknown,
    Zero,
    One,
    Two,
    Three
  } ;
  //snip

  #define wart_TRANSLATE_CASE(enumInt) if(str.find(#enumInt) != std::string::npos) return enumInt;
  static EnumInt translateStringToEnumInt(const std::string& str) const {
     wart_TRANSLATE_CASE(Zero)
     wart_TRANSLATE_CASE(One)
     wart_TRANSLATE_CASE(Two)
     wart_TRANSLATE_CASE(Three)
     return Unknown;
  }
  #undef wart_TRANSLATE_CASE
private:
  static int32_t sInt;
  int32_t instanceInt;
};


Sure enough, the compiler gave the following error:


error: static member function ‘static SomeClassWithStaticAndPrivateData::EnumInt SomeClassWithStaticAndPrivateData::translateStringToEnumInt(const string&)’ cannot have cv-qualifier


Her assertion was "Why is it wrong to indicate to the compiler that I'm defining a static function, and that it isn't going to change any static variables of the class at all?"

At first blush, her idea seems perfectly reasonable: you want the compiler to provide protection against any inadvertent modification of the class' data members. So why doesn't the C++ standard allow this? Is it just an oversight?

Well, I don't think so. I think the C++ standardizers, whatever might be their faults, did spend quite some time thinking about the specific capabilities and features of the language. I think they are battling against the notion of a false sense of security, while trying to maintain consistency.

What does it mean to have a const member function? And what does it mean to have a static member function?

Non-static data members are the norm. These are data members that are created for each instance of a class. Static members, on the other hand, belong to the class, and not to any particular instance or object. They are created once, and are shared by all object instances of that class.

Similarly, non-static member functions (methods) are the norm. These are methods that take an implicit pointer to the object they are associated with: the "this" pointer. Any instance data members accessed can be thought of as being accessed via a this->data_member dereference. Since such an access is the norm, the "this->" dereference operation may be omitted for notational convenience, but it is nonetheless present.

Using the class above, a (hypothetical) non-static member function foo can be looked upon as a function of the type:

void SomeClassWithStaticAndPrivateData::foo(SomeClassWithStaticAndPrivateData* this);

A static member function, on the other hand, since it belongs to the class and is shared by every instance of the class, does NOT need to refer to (indeed, cannot refer to) a particular instance of the class. Consequently, it does not need a special / implicit "this" pointer.

A static member function is therefore very much like a plain vanilla "C" function in that there is no hidden pointer passed implicitly by the compiler. Its signature therefore remains clean of any implicit parameters (the name is still mangled with the class scope, but that's a topic for another day when we examine nm and c++filt). A public static member function can be approximated to a simple "C" function protected by a namespace that just happens to be the namespace of the class it is defined in. Also, it can only access static data members and methods within that class. Continuing with the example above:


namespace SomeClassWithStaticAndPrivateData { // illustration only; a real namespace couldn't reuse the class' name
 void static_foo() {
   sInt = 42; //ok to access static data members
   //instanceInt = 0; //ERROR: cannot access without a this*
 }
}


The const qualifier on a non-static member function is a promise to the compiler that the method will not modify the object's (non-static) data members. This is achieved by passing "this" as a pointer to a const instance of the class. Continuing with the example above, the const_foo function may be looked upon as:

void SomeClassWithStaticAndPrivateData::const_foo(const SomeClassWithStaticAndPrivateData* this);

Which brings us to the issue with the const qualifier on a static member function: since it doesn't take a "this" pointer, there is nothing to which the const qualifier can be applied.

More philosophically, if we did allow the const qualifier on a static member function, it would be difficult for the compiler to guarantee that a static member function declared const wouldn't actually modify any data member. Consider the contrived example below. static_const_foo represents the hypothetical abomination that is a static method with a const qualifier.


//defining global and static vars 
int32_t *p_global_Int = NULL;

//defining static data members
int32_t SomeClassWithStaticAndPrivateData::sInt = 0;

void SomeClassWithStaticAndPrivateData::foo() {
   if(NULL == p_global_Int) {
      p_global_Int = &sInt;
   }

   // do other processing

}

void SomeClassWithStaticAndPrivateData::static_const_foo() const {
   if(NULL != p_global_Int) {
      (*p_global_Int)++;
   }
   //try to do something else
}

In and of itself, the definition of static_const_foo() is completely legal. It takes a global pointer, null-checks it for bonus points, and then increments the non-null referenced integer. Perfectly valid.

Similarly, the definition of foo() is also completely legal. If the global pointer is not already filled in, it innocently points it at the class' static data member. For additional horror points, it actually even points at private data, breaching any protection the compiler might offer, but that's a separate story (for details, check out Scott Meyers' excellent book "Effective C++: 55 Specific Ways to Improve Your Programs and Designs", p. 126, Item 28, "Avoid returning handles (references, pointers or iterators) to object internals"). What it did is completely legal and well within its rights under the rules of C++.

Taken together though, they represent not only a moral sin, but an actual violation of the promises purportedly made to the compiler: a function that declared itself as not changing any static data members exhibits the highest depravity by fondling its supposedly const private members. And there would be no easy way for the compiler to detect, let alone prevent, this. Since this level of service cannot be guaranteed, it is NOT provided by the compiler. And since it cannot be provided, it is best to explicitly disallow it, so that developers are aware that they are on their own in this regard.

Tuesday, November 27, 2012

Installing Brother MFC-J430W on Ubuntu 12.04.1 LTS

Yep, upgraded from my Brother MFC-210C to MFC-J430W. Thanks OfficeMax for the awesome Thanksgiving sale. Now, we have wireless printing in da home! This allows me to print from my iOS devices as well!

Brother's instructions helped me get up and running quickly, but since they are trying to cover several setup configurations all at once, it can get a little confusing. Here's what I did to install it on my fresh Ubuntu 12.04.1 LTS install.

Determine drivers needed
We need to figure out what drivers to install for Ubuntu 12.04.1 LTS.

From the following page, it looks like it is brscan4 http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/download_scn.html#brscan3

brscan4 models
DCP-7055 DCP-7055W  DCP-7057  DCP-7060D  DCP-7065DN  DCP-7070DW  DCP-8110DN
DCP-8150DN  DCP-8155DN  DCP-8250DN  DCP-9055CDN  DCP-9270CDN  DCP-J140W  DCP-J525W
DCP-J725DW  DCP-J925DW  FAX-2950  FAX-2990  HL-2280DW  MFC-7240  MFC-7290
MFC-7360  MFC-7360N  MFC-7362N  MFC-7460DN  MFC-7470D  MFC-7860DN  MFC-7860DW
MFC-8510DN  MFC-8515DN  MFC-8520DN  MFC-8690DW  MFC-8710DW  MFC-8910DW  MFC-8950DW / MFC-8950DWT
MFC-9125CN  MFC-9325CW  MFC-9460CDN  MFC-9465CDN  MFC-9560CDW  MFC-9970CDW  MFC-J2510
MFC-J280W  MFC-J425W  MFC-J430W  MFC-J432W  MFC-J435W  MFC-J4410DW  MFC-J4510DW
MFC-J5910DW  MFC-J625DW  MFC-J6510DW  MFC-J6710DW  MFC-J6910DW  MFC-J825DW  MFC-J835DW


Since I have a 32 bit install, I'll be going with the following:
brscan4 32bit  deb  0.4.1-2  61 KB  2012.Oct.09
scan-key-tool 32bit  deb  0.2.4-0  45 KB  2012.Oct.09

Found at: http://www.brother.com/cgi-bin/agreement/agreement.cgi?dlfile=http://www.brother.com/pub/bsc/linux/dlf/brscan4-0.4.1-2.i386.deb&lang=English_lpr

and http://www.brother.com/cgi-bin/agreement/agreement.cgi?dlfile=http://www.brother.com/pub/bsc/linux/dlf/brscan-skey-0.2.4-0.i386.deb&lang=English_lpr

Pre-requisites

Found at: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/before.html#prereq

List of pre-requisites: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/before.html
Pre-required Procedure (2) (sic)
    Related distributions
    Ubuntu8.04 or greater
    Related products/drivers
    cupswrapper printer/PC-FAX drivers
    Requirement
    1. "sudo aa-complain cupsd" command is required before the installation.
    2. "sudo mkdir /usr/share/cups/model" command (as it is) is required before the installation. 

NOTE: aa-complain changes the enforcement of security policies so that instead of aborting the offending operation, a complaint is written to the syslog. From the aa-complain man-page:


 aa-complain is used to set the enforcement mode for one or more profiles to complain. In this mode security policy is not enforced but rather access violations are logged to the system log.

Driver download page: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html

Download the lpr driver
 http://www.brother.com/cgi-bin/agreement/agreement.cgi?dlfile=http://www.brother.com/pub/bsc/linux/dlf/mfcj430wlpr-3.0.0-1.i386.deb&lang=English_lpr

Download cups driver
 http://www.brother.com/cgi-bin/agreement/agreement.cgi?dlfile=http://www.brother.com/pub/bsc/linux/dlf/mfcj430wcupswrapper-3.0.0-1.i386.deb&lang=English_gpl

Install the lpr driver per the following instructions. I'm linking to the original Brother page for reference.
ref: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/instruction_prn3.html ref: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/instruction_prn1a.html

Follow the install instructions for the cups driver:


# pushd ~/temp/brother-printer/brother-mfc-j430w
# sudo dpkg  -i  --force-all mfcj430wlpr-3.0.0-1.i386.deb

Selecting previously unselected package mfcj430wlpr.
(Reading database ... 169820 files and directories currently installed.)
Unpacking mfcj430wlpr (from mfcj430wlpr-3.0.0-1.i386.deb) ...
Setting up mfcj430wlpr (3.0.0-1) ...

# sudo dpkg  -i  --force-all mfcj430wcupswrapper-3.0.0-1.i386.deb
Selecting previously unselected package mfcj430wcupswrapper.
(Reading database ... 169848 files and directories currently installed.)
Unpacking mfcj430wcupswrapper (from mfcj430wcupswrapper-3.0.0-1.i386.deb) ...
Setting up mfcj430wcupswrapper (3.0.0-1) ...
cups stop/waiting
cups start/running, process 11255
lpadmin -p MFCJ430W -E -v usb://Brother/MFC-J430W?serial=BROG2F131293 -P /usr/share/cups/model/Brother/brother_mfcj430w_printer_en.ppd

Check that the drivers are installed:

# dpkg  -l  |  grep  Brother
ii  mfcj430wcupswrapper                        3.0.0-1                                         Brother CUPS Inkjet Printer Definitions
ii  mfcj430wlpr                                3.0.0-1                                         Brother lpr Inkjet Printer Definitions
ii  printer-driver-ptouch                      1.3-3ubuntu0.1                                  printer driver Brother P-touch label printers

I think P-touch is from my older MFC-210C. I'll let it be for the time being.


Connect the printer via USB and wait ~30 seconds. Check to see if the printer is found:


# lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
Bus 005 Device 002: ID 046e:6000 Behavior Tech. Computer Corp. 
Bus 001 Device 004: ID 04f9:0281 Brother Industries, Ltd

Open a new browser window and point it to your CUPS configuration page:

http://localhost:631/printers/

If all went well, you should see something like below:


Queue Name: MFCJ430W
Description: MFCJ430W
Make and Model: Brother MFC-J430W CUPS
Status: Idle

Clicking it should take you to the link:
http://localhost:631/printers/MFCJ430W

You should see something like below:

Description: MFCJ430W
Location: 
Driver: Brother MFC-J430W CUPS (color, 2-sided printing)
Connection: usb://Brother/MFC-J430W?serial=BROG2F131293
Defaults: job-sheets=none, none media=na_letter_8.5x11in sides=one-sided
At this point, test that everything is printing correctly. From the "Maintenance" task list, select "Print test page". I got a nice printer test page.
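If you prefer the command line to the CUPS web page, the same check can be sketched with lpstat and lp (assuming the queue name MFCJ430W created above; written with a fallback so it degrades gracefully on a machine without the queue):

```shell
# Show the queue status; print a note if CUPS or the queue is absent.
lpstat -p MFCJ430W 2>/dev/null || echo "queue MFCJ430W not found"

# Once the queue exists, a quick test job can be sent like this:
# echo "hello from lp" | lp -d MFCJ430W
```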

 

Sharing your printer over Samba: no need, since this is already a network printer. Simply download the Windows drivers (mine was Vista 32-bit), add a new network printer, and the rest happens automatically.

Scanner installation

Ensure the sane-utils and xsane packages are installed:

sudo apt-get install sane-utils
sudo apt-get install xsane
Instructions at: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/instruction_scn1a.html

Install the USB scanner driver (initially it will work only for root):

 sudo dpkg  -i  --force-all brscan4-0.4.1-2.i386.deb
Selecting previously unselected package brscan4.
(Reading database ... 169852 files and directories currently installed.)
Unpacking brscan4 (from brscan4-0.4.1-2.i386.deb) ...
Setting up brscan4 (0.4.1-2) ...
This software is based in part on the work of the Independent JPEG Group.

Check that it is installed:
dpkg  -l  |  grep  Brother
ii  brscan4                                    0.4.1-2                                         Brother Scanner Driver

Test that scanning works for root:

sudo xsane
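For a command-line check (independent of the Brother instructions), sane-utils also ships scanimage, which can list the scanners SANE detects; sketched with a fallback so it is harmless when no scanner is attached:

```shell
# List detected scanners; print a note if none are found or sane-utils is absent.
scanimage -L 2>/dev/null || echo "no scanner detected"
```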

Setting up for normal users:
Instructions: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/instruction_scn1c.html#u9.10


sudo vi /lib/udev/rules.d/40-libsane.rules
Add the following lines indicated by "+". The lines not starting with + are given for context (diff style). Ensure that + is not included in your actual edits:

ENV{libsane_matched}=="yes", RUN+="/bin/setfacl -m g:scanner:rw $env{DEVNAME}"

+# Brother scanners
+ATTRS{idVendor}=="04f9", ENV{libsane_matched}="yes"

LABEL="libsane_rules_end"

Save and reboot:

sudo reboot