In my case, I wanted to run the latest development version of xmonad. (Cue about an hour of fruitless Googling.)
Eventually, I broke down and yum install fluxbox'd to get an example. After that, a repoquery --list fluxbox gave a list of the files installed, and pointed me to /usr/share/xsessions, which contains the list of WMs that the Fedora greeter uses to present options.
Adding a new option is as simple as creating a new desktop file in
that directory, and pointing the Exec
field to the binary of your
WM.
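For example, a minimal desktop entry might look like the following (the Exec path is an assumption — point it at wherever your WM binary actually lives, and note that the exact keys your greeter requires may vary):

```ini
[Desktop Entry]
Name=xmonad
Comment=Tiling window manager
Exec=/usr/local/bin/xmonad
Type=XSession
```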
This post is just testing to see how Octopress works (I’m putting it through its paces) with a few code blocks, including one in Haskell.
There are more examples of including code here: http://octopress.org/docs/blogging/code/
I recently found the cookbook “Cooking for Geeks”, and it inspired me to build a sous vide (the section on making a sous vide at home is also covered in a blog post by the author here). If you’re not familiar with the term, sous vide lets you very carefully control the temperature you’re cooking at, so you can’t overheat foods.
Sous vide is similar to using a crockpot, but it requires much more control over the temperature than a typical crockpot provides. If cooking with a crockpot is like sketching dinner, then sous vide is analogous to using finely-tuned drafting instruments.
Jeff Potter (author of Cooking for Geeks) provided part numbers for a suitable thermocouple (a probe thermometer) and a corresponding temp switch (a switch that controls a gate based on the temperature it sees from the attached thermocouple). I would have had some trouble getting the right set-up without that guidance, but still ended up diverging a bit by accident. I ordered an AC version of the temp switch instead of a DC version, but in the end I think it worked better this way.
The biggest problem I have with Jeff’s suggestions is that the book and blog show the temp switch and wiring sitting un-enclosed on the counter, with everything hard-wired together. I’m not comfortable with 115 VAC running around on my kitchen counter, so I needed some sort of enclosure. I also wanted to be able to detach the thermocouple to easily clean it, and I didn’t want to modify my crockpot much at all. (I really don’t want another appliance, and I still want a functional crockpot.)
Given that I don’t have access to a fab plant (or a MakerBot!), I opted to go with a simple locking Tupperware container for a housing. They lock shut, have a gasket for a tight seal, and the plastic is very easy to modify to hold all the ports that the sous vide control needs.
At this point it’s probably reasonable to think of the control as a temperature-controlled power outlet, rather than something specific to a sous vide, since the purpose is really irrelevant for now.
I’m not going to go into great detail about the specific parts (although I’ll talk about the connections somewhat), but you can see the full parts list here.
The pictures show the housing fairly well. We started by cutting out holes for all the components with a hot X-Acto knife (heated in a blow torch – as Mike put it, it cut through plastic like hot butter through a knife ;). Pretty much all the parts had some sort of template we could go by:

- temp switch: we used the wire cover to draw out the opening
- power outlet: I had a cover plate that I was intending to use; instead, we just used the cover plate to mark out the large round plug hole and the two screw holes, then used the cover plate screws to hold the plug in
- thermocouple jack: just drilled it out with a drill
- AC input: traced around a PC-style power cord, cut to the outside of the sharpie line, then drilled out the screw holes (marked after we had the main hole in and could test-fit the plug)
The temp probe arrived with a bare pair of wires on one end, which wasn’t going to work that well, so we (I was working with a friend who has a soldering iron :) attached a 1/8” headphone plug to the cables, then soldered a corresponding jack to some 22-gauge wire that eventually went to the temp switch inputs (ports 7 & 8 on the TCS-4010).
I was initially going to use a standard pigtail power cable to hook this up to the wall, but when shopping for parts I found a PC-style (D-shaped) power plug, which works beautifully, and it was cheap!
Locating a cheap power plug that would fit in a small container and only had one outlet was actually pretty difficult. The outlets I found with a flange were more expensive than I could justify ($15-20), and everything else had two or more plugs, or a plug and a switch, and so on. Eventually, I settled on a 20-amp single-outlet plug, designed for a standard outlet box. In the future, I think I’ll go with a dual-outlet 15-amp plug, or keep looking. The 20-amp outlet is very hard to plug things into, and I’m not entirely sure why. It works just fine once you get everything plugged in, though.
I went with 12-gauge solid copper, in three colors, for the internal wiring. It was quite difficult to form to the right shape, though, and we needed to solder it in place. The wire was so stiff that we ended up re-soldering quite a bit: we’d solder a connection, then bend the wire to make the next connection, and the solder would break. Eventually, we wired it all together and did the soldering in one go. This worked, but it meant we couldn’t easily cover some of the connections with heat-shrink tubing. In particular, the three connections to the AC input were bare, so we covered those with hot glue to keep fingers and other potential conductors from shorting it out (and delivering quite a shock).
We were able to heat-shrink most of the connections, however, and everything is soldered in place, so I think it will hold up :).
I did a test run with eggs to see how this worked, with multiple thermometers to compare readings (you can see one of the other thermometers in the first picture). I did a quick calibration based on those readings, but the controller was still off by about 5 degrees (the initial calibration was pretty drastic: there was a difference of over 10 degrees!). That became evident when I cracked the eggs and they were a bit under-cooked. The book suggested flash-boiling them anyway, so I had boiling water handy and re-calibrated based on that. Don’t do what I did, or if you do, cook something cheap in the first run :)
Cabal-dev does this by creating a per-project sandbox that contains a package database of all the dependencies as well as the project under development. Therefore, it was simple to add support for launching ghci with this package database in place of the user package database. That’s been added in cabal-dev-0.7.3.1, which is available on hackage now, allowing you to do things like this (using my Haskell Version Space Algebra library as an example):
$ cd havsa
$ ls
LICENSE Setup.hs src/ versionspaces.cabal
$ cabal-dev install
Resolving dependencies...
Configuring mtl-1.1.1.1...
Preprocessing library mtl-1.1.1.1...
Building mtl-1.1.1.1...
[ 1 of 21] Compiling Control.Monad.Identity ( Control/Monad/Identity.hs, dist/build/Control/Monad/Identity.o )
....
Registering VersionSpaces-0.0...
Installing library in
/home/creswick/development/havsa/cabal-dev//lib/VersionSpaces-0.0/ghc-6.12.3
Registering VersionSpaces-0.0...
$ cabal-dev ghci
GHCi, version 6.12.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> :m + AI.VersionSpaces
Prelude AI.VersionSpaces> showBSR EmptyBSR
Loading package syb-0.1.0.2 ... linking ... done.
Loading package base-3.0.3.2 ... linking ... done.
Loading package mtl-1.1.1.1 ... linking ... done.
Loading package logict-0.4.1 ... linking ... done.
Loading package VersionSpaces-0.0 ... linking ... done.
"Empty"
Prelude AI.VersionSpaces>
This is still far from perfect: you can’t easily load code into the ghci session without exiting, re-running cabal-dev install and cabal-dev ghci, but it’s a good start.
I have been adopting a presentation style that diverges from the traditional bullet-point style promoted by OpenOffice and PowerPoint (although PowerPoint 2007 diverges from pure bullets to more interesting shapes, using shading and encapsulation to show hierarchy. It’s a large improvement, but it still falls short of my ideal). Instead of bullets and text, I try to make use of imagery and visual examples whenever possible.
My experience has reinforced what I’ve come across in the literature about balancing your audience’s attention between the content on the wall and your narration. Too much text or detail and you risk losing your audience because they’re overwhelmed or, if you’re lucky, because they are focusing on the slide content instead of listening to your explanations. It is also much easier to trigger emotional responses with visuals than it is with text, which explains some of the motivation for promotional/motivational presentations that are virtually devoid of text. (Seth Godin’s talks come to mind. One way in which my presentations differ substantially from Seth’s is that I generally talk about fairly technical topics. The difficulties associated with finding high-quality, emotional images to convey the intricate details of ptrace(2) are not the topic of this post, however.)
A number of years back, I found a mention of using Scalable Vector Graphics (SVGs) for unconstrained presentations and that idea has stuck. I’ve been developing slides in PowerPoint (OpenOffice has not yet reached the point where I can create professional-looking content), but I’ve never been happy with the traditional slide-based presentation style. SVGs promise the ability to move between arbitrary locations, following a pre-defined path from “slide” to “slide”. Furthermore, since the content is scalable, you can literally zoom in to a portion of content to go into more detail, or zoom out to show context. This presentation mode would also make it easy to diverge from your plan based on audience feedback. This can be done with PowerPoint, but it is extremely difficult.
Prezi promises these benefits, so I spent the last few days building a presentation with Prezi at Galois: portable build systems. (The prezi presentation is near the bottom of this post.) The rest of this entry discusses my experiences with Prezi.
I initially found the Prezi interface to be very intuitive. The translation “zebra”, a striped, round, multi-function circle, appears whenever you select an object. You then use the zebra to move, scale, rotate, or otherwise manipulate the selected object. Other options are presented through a bubble menu that rotates and scales to show more detailed options as you select sub-“bubbles”. It is well worth the few minutes it takes to sign up for a free account and try it out. If you happen to be using 64-bit Linux, you probably won’t be able to use the flash app, however; Prezi doesn’t appear to work under that environment. (If you can interact with the presentation embedded on this page, then you should be OK.) There is also an Adobe Air-based desktop app, which I used extensively.
I was off and brainstorming a presentation mind-map style after only a few minutes playing with the interface. The freedom to create a few words of text, zoom in and flesh out more details, jump back out and pull in an image, all without concern for the layout or final form of the presentation, was extremely motivating and liberating.
I was able to be quite productive with Prezi until I began to consider the need for a unifying theme. Two things stood in the way:
- None of the pre-defined colors can be changed through the user interface, aside from the eight styles mentioned above, which change all the colors, styles, and fonts.
- You can include arbitrary images, which helps with the limited set of shapes; however, you can only include PDFs if you are using the web-based client – the desktop client does not currently support PDF importing.
I stumbled across a solution to the limited selection of colors by unzipping the .pez file that holds a Prezi presentation on-disk and exploring the contents:
prezi/
├── content.xml
├── preview.png
└── repo/
├── 13177749.png
├── 13177754.jpg
├── 13177758.png
├── 13177835.swf
├── codetree.png
└── Personal_computer,_exploded_5.png
content.xml defines the SVG-like presentation content, and it ends with a set of css styles.
The colors in these styles can be adjusted, and you can even add new styles here (although you will need to manually insert them into the xml where you wish to use the new styles). After editing the content.xml, zip up the presentation tree, taking care to maintain the correct hierarchy and no compression:
$ ls
prezi/
$ zip -r -0 enabling-portable-build-systems-biuinv2vus9x.pez prezi/
One surprising benefit is that the updated styles are actually adopted by the UI widgets in the Prezi application, once you load a modified .pez file.
These changes also persist across saves, loads, uploading to Prezi.com, and they appear to render properly when embedded, granting quite a lot of power, if you’re able and willing to work with xml and css periodically.
Editing content.xml also proved to be the best way to spellcheck the presentation content, although the text that is actually displayed is in CDATA nodes, which your editor may skip over when running a spell checker. Thankfully, the text is duplicated as plain text nodes, so you are still alerted to spelling errors when running ispell-buffer in emacs. You can then fix the CDATA entry with a recursive edit. (I suspect that the duplication is there to simplify text searching.)
I’m rather satisfied with these workarounds: The ugly aspects could be automated with some simple tools to update the content.xml as needed, and the hacks I found worked surprisingly well.
Unfortunately, I don’t think Prezi is ready for prime-time, despite my success with css styles and spellchecking. There are simply no facilities to precisely align or distribute objects with respect to each other. Further complicating this is the lack of a “group” option to create aggregate objects. You can select multiple objects by dragging a rectangle if you hold shift, but that isn’t possible if the objects are on top of other objects – the first click of a shift-drag must occur on the background of the Prezi, or you will simply select the lower object. While this sounds a bit like a minor quibble, it is impossible to accurately position complex sets of objects if they are layered on top of other content (such as a background image). I often need to add or remove “slides”, and with Prezi, that can include a lot of object translations to provide or absorb space while fitting with the high-level overview of your presentation. Without alignment tools, you also take the risk that a title will display askew with respect to the screen borders when you are mid-presentation.
I eventually adopted the following practice to help keep content square:
Now, never rotate any individual components of that slide again. Use shift-clicking to select everything in the slide each time you need to rotate or move the slide. If you need to add new content, first enter ‘show’ mode and click on the frame to make the camera rotate properly with respect to the slide.
I’m excited to see how Prezi evolves, and I will be one of the first in line once the selection / alignment problems are fixed. I hope that Prezi will motivate other implementations with similar capabilities; there is plenty of room for some healthy competition.
Without further ado, here’s a short list of brainteasers (none are of my creation, and I do not have citations–if you know who to credit for any of these, please let me know and I will add proper attribution info.)
Calendar Cubes
A man has two cubes on his desk. Each face of each cube has a single-digit number written on it. With these two cubes, the man is able to enumerate all the days in any month, and each morning he arranges the cubes so that the number of the current day is on top, always using both cubes. How are the numbers distributed on the cubes?
Pennies 1
You are blindfolded in a room with 100 pennies. 30 of the pennies are heads-up, the remainder are tails-up. You can interact with the pennies in any way, but your fingers are not dexterous enough to feel the contours of the coins (so you can’t feel one to see which side is heads, or tails). Since you are blindfolded, you can’t see them either. Your task is to manipulate the coins such that there are two sets and each set has an equal number of coins that are heads-up. (The sets must be disjoint, non-empty, and all pennies must be in one of the two sets.)
Pennies 2
Given N pennies, one of which is counterfeit (and therefore of a different weight from the remainder) and a balance, how can you find the counterfeit coin in less than three weighings on the balance?
Eggs
You are in a 100-floor building on a planet with oddly low gravity and/or surprisingly durable eggs. You happen to have two of these eggs (unfertilized, I assure you). Your task is to find the highest floor from which you can drop an egg and have it remain intact, using the fewest drops in the worst case.
Numbers
Given 99 unique integers between 1 and 100, provide an optimal algorithm to find the remaining integer in that range that is not in the set.
Hint (ROT13): Bcgvzny gnxrf yvarne gvzr naq pbafgnag fcnpr.
Prisoners
50 people are imprisoned, and during their imprisonment the captor will invite people randomly in to visit with her. All visits are one-on-one, and each prisoner has a unique tunnel from their cell to the captor’s office (so you can’t look out your cell and see who is going in). In the captor’s room is a bowl that the prisoners can optionally turn over, or turn right-side up, during their visit(s). The initial state of the bowl is known to everyone.
The imprisonment may last for an infinite period of time, during which each prisoner will be invited into the captor’s office many, many times (essentially infinite, but it need not be infinite; it could just be a reasonably small number in the optimal case). The imprisonment ends when one prisoner says: “Everyone has been in to see the captor at least once.” If a prisoner says this and they are wrong, all prisoners are killed immediately. Because the captor may decide not to visit anyone for a while, it is as if the prisoners have no concept of time, so they can’t bound the number of people seen based on the passage of time.
To give the prisoners a chance, they are allowed to convene briefly before their imprisonment, during which time they can plan a strategy. How do they do it?
Sequences
What is the next line in this sequence?
1 1 1 2 1 1 1 1 2 3 1 1 2 2 1 1 2 1 3
Problem: I need a schema for FooTask
Solution: build the schema test-first. Create a tests/ directory of example documents, named valid-foo.xml for documents the schema should accept and invalid-bar.xml for documents it should reject (I use numbers for foo and bar). Then write the schema, foo.xsd, and let make validate every example against it:
[cc lang=”bash”]
XSD=foo.xsd

test:
	@for file in `ls -1 tests/valid*.xml`; do if xmlstarlet val -q --xsd ${XSD} $${file}; then echo "pass"; else echo "fail: $${file}"; fi; done
	@for file in `ls -1 tests/invalid*.xml`; do if ! xmlstarlet val -q --xsd ${XSD} $${file}; then echo "pass"; else echo "fail: $${file}"; fi; done
[/cc]
Now, run make, and if anything fails you can manually run xmlstarlet val -e --xsd foo.xsd [failing file.xml] to see the details.
I make no claims to being a cryptographer, but I did have a number of questions about the practical viability of this approach. Now, there are many questions in that vein that are directed at the performance characteristics of Gentry’s approach (which are abysmal, but not asymptotically so); I was curious about the use of side effects to discern information about the encrypted content.
For example, anyone who has used a debugger knows that you can monitor the flow of a program that has been instrumented with debugging symbols, and you can learn a great deal about the input even without examining the content of variables. If a given conditional branch directs execution one way, then you know the predicate evaluated to a specific value. I set out to determine why this sort of attack is not a problem, and I ended up learning a lot about the way programs that run on encrypted data must operate.
Let’s take a moment to quickly discuss homomorphisms, and homomorphic encryption.
a homomorphism is a structure-preserving map between two algebraic structures –Wikipedia
In this case (encryption) the homomorphism is a mapping from the cleartext to the ciphertext. Fully homomorphic encryption, as Gentry discovered, preserves addition and multiplication–meaning that you can add and multiply ciphertext, and the result can be decrypted to reveal cleartext that has been added and multiplied in the same way. Addition and multiplication provide the operations necessary to implement boolean logic and are, therefore, sufficient to program very complex transformations (I’m not certain that it is safe to say “arbitrarily complex”).
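As a quick sanity check of that claim, here is a sketch (in plain shell arithmetic on cleartext 0/1 values, not actual encryption) of boolean operators built from nothing but addition, multiplication, and subtraction (subtraction being addition of an additive inverse):

```shell
# Boolean logic over {0,1} using only arithmetic -- the same trick a
# program over homomorphically encrypted values must use, since it can
# never branch on the values it computes.
band() { echo $(( $1 * $2 )); }               # AND: a*b
bnot() { echo $(( 1 - $1 )); }                # NOT: 1-a
bor()  { echo $(( $1 + $2 - $1 * $2 )); }     # OR:  a+b-ab
bxor() { echo $(( $1 + $2 - 2 * $1 * $2 )); } # XOR: a+b-2ab

bor 1 0    # prints 1
band 1 0   # prints 0
```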
It’s important to realize that every addition or multiplication operation results in a value that is encrypted. The running program cannot know the intermediate results, and indeed it does not.
Edward Kmett posted the conversion from if/then/else to addition/multiplication on Schneier’s blog:
[cc lang=”java”]
if (condition) {
  return then_clause;
} else {
  return else_clause;
}
[/cc]
becomes:
[cc lang=”java”]
return condition * then_clause + (1 - condition) * else_clause;
[/cc]
Here’s a “real” example (it compiles, at least) using both approaches. This is just meant to be used for explanation – compilers could easily do the translation from the code in is0_clear() to is0_enc(). I’ve written them out separately here so we can look at the generated bytecode.

[cc lang=”java”]
public class Test {
  public int is0_clear(int input) {
    if (0 == input) {
      return 2;
    } else {
      return 3;
    }
  }

  public int is0_enc(int input) {
    // I'm cheating a bit to keep this simple -- calculate
    // the conditional to be either 0 or 1:
    int cond = 0 == input ? 1 : 0;
    return cond * 2 + (1 - cond) * 3;
  }
}
[/cc]
And here’s the bytecode (generated by sun-java-6, and output with javap -verbose).
[cc lang=”asm”]
public int is0_clear(int);
  Code:
   Stack=2, Locals=2, Args_size=2
   0: iconst_0
   1: iload_1
   2: if_icmpne 7 // Conditional Jump!!
   5: iconst_2
   6: ireturn     // return a constant 2
   7: iconst_3
   8: ireturn     // return a constant 3
public int is0_enc(int);
Code:
Stack=3, Locals=3, Args_size=2
0: iconst_0 // lines 0-9 here are for the “cheating” part
1: iload_1 // just ignore them – the arithmetic to accomplish
2: if_icmpne 9 // the same thing is complex, and not important.
5: iconst_1
6: goto 10
9: iconst_0
10: istore_2 // note that there are no conditional jumps below here:
11: iload_2
12: iconst_2
13: imul
14: iconst_1
15: iload_2
16: isub
17: iconst_3
18: imul
19: iadd
20: ireturn // return the result of the calculated expression.
[/cc]
Since every operation results in an unknown value, no conditional branches can be taken! Every branch has to be evaluated, and the result of the ‘correct’ branch is selected by multiplying by a binary value that is, itself, encrypted! This means that things like run-time short-circuit evaluation are not possible, monitoring program flow is meaningless, (possibly?) every input will result in the same run-time, and all side effects will happen regardless of the input.
Going further down this rabbit hole, caching is impossible, and global state (if even possible) is likely to be extremely dangerous. I shudder to think of how Python’s concept of scoping would interact with a compiler that generates code for homomorphically encrypted input.
Aside from the pure overhead of dealing with encrypted data, and the “refreshing” required with Gentry’s algorithm, I think that there are going to be some serious performance and development concerns once homomorphic encryption becomes a reality. The programming practices that are common in languages like Java and Python now are not likely to hold up. I expect that the APIs that enable operation on encrypted data will be based on total functions, and I have only begun to think about the implications for testing, code coverage, and quality assurance.
I’m done with Firefox – Opera 10 now plays flash well, has adblock via urlfilter, a cleaner UI (no menubar, a menu button!), vertical tabs are supported natively, etc. I don’t really like the widget toolkit used in the file open/save dialog, but that’s much better than the horrid performance/stability/bizarre bugs of Firefox.
The minimal UI possible with Opera is also a major win in my book.
The following error had me stumped for a few days:

[cc lang=”bash”]
[INFO] Error retrieving previous build number for artifact ‘de.balokb:libreadline-java-i386-Linux-c23cxx6:jar’: repository metadata for: ‘snapshot de.balokb:libreadline-java-i386-Linux-c23cxx6:1.0-SNAPSHOT’ could not be retrieved from repository: inhouse_snapshot due to an error: Exit code: 1 - Host key verification failed.
[/cc]
All the googling I did turned up people stumped with ssh public key problems, or users who had specified ssh: instead of extssh:, etc. It was fairly quick to eliminate those issues, or so I thought. (ssh localhost works, right? No problem.)
I happened to look in more detail at my pom.xml: [cc lang=”xml”]
<repository>
<id>inhouse</id>
<name>Inhouse Internal Release Repository</name>
<url>scpexe://10.0.0.26/var/www/maven/inhouse</url>
</repository>
[/cc]
Hm… 10.0.0.26. I wonder…
[cc]
$ ssh 10.0.0.26
The authenticity of host ‘10.0.0.26 (10.0.0.26)’ can’t be established.
RSA key fingerprint is a7:bf:36:4c:b9:c7:c2:f9:03:9a:3a:a7:4f:10:e5:ba.
Are you sure you want to continue connecting (yes/no)?
[/cc]
Ah ha! I clearly can’t use a pom.xml that lists “localhost” in the server section – I’d only be able to push from one place. However, since I’d never ssh’d to 10.0.0.26 from localhost, the fingerprint was unknown, and that was causing maven to error out with the problem I saw initially.
“Fingerprint ID failed” would have been a nicer error message, but I don’t know that that is possible.
]]>Today, (and yesterday, and a good portion of the night in-between) I ran into a nasty bug in a library that I didn’t know my code depended on. It isn’t particularly important what I was working on, but just for context: I needed to strip a lot of text content out of nodes in the complete wikipedia revision history dump, so I was using Sax to parse the xml stream, filter out the stuff I wanted filtered out, and save the stuff that, well, I wanted saved. Being that the input was all of wikipedia, there were a fair number of unicode characters in there. As it turns out, the 2.6.2 xercesImpl has some sort of bug that allows xml with certain characters to be read without throwing exceptions, but when you try to write the chars that were actually read, you end up trying to write characters that aren’t valid in xml. Even if I’d known that in advance, my response would have been something like “ok, so what? I’m not using xercesImpl, and certainly not a version that old”.
Well.
You see, in addition to using Maven, I’ve also been using the Google Collections and JSR305 libraries, so I just drop those <dependency> entries into the pom for all my new projects–I just assume that I’ll need them, and I usually do.
Unfortunately, JSR305 1.3.8 depends on jaxen 1.1.1, which depends on xercesImpl 2.6.2 (jaxen also needs this dependency via xom 1.0, for what that’s worth).
Because that dependency was already present in my build path (via mvn eclipse:eclipse) and in the generated jar (via <addClasspath> and <classpathPrefix> in the maven-jar-plugin configuration section), I never realized that my sax code actually had a direct dependency on xerces as well. This all came to a head when, 3.53gb into my 2.8tb run, these rather unhelpful exceptions started popping up:
[cc lang=”bash”] java.io.IOException: The character ‘?’ is an invalid XML character
at org.apache.xml.serialize.BaseMarkupSerializer.characters(Unknown
Source)
at com.stottlerhenke.tools.wikiparse.ContentStripper.characters(ContentStripper.java:195)
at org.apache.xerces.parsers.AbstractSAXParser.characters(Unknown
Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown
Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown
Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at com.stottlerhenke.tools.wikiparse.ContentStripper.parse(ContentStripper.java:96)
at com.stottlerhenke.tools.wikiparse.ContentStripper.main(ContentStripper.java:379)
[/cc]
<rant>
“?” is not unicode – it fits just fine in ASCII tables everywhere – so please don’t tell me that it’s an invalid unicode character :) (0xd800 is an invalid unicode character, and that would have been much more helpful) </rant>
Many hours later I was able to find a sample of the actual input that was causing these problems, and I was able to reproduce the issue with an input slightly smaller than 2.8tb. Once that was done, I set out to make a minimal test case. Rather than bother with a new maven project, I just hacked it out in emacs (not using google collections, etc. because, clearly, I wanted it minimal). To my surprise, everything worked, and worked fantastically! But how? I didn’t even supply an xml api on the classpath, yet it ran just fine!
In truth, I did supply an xml api – xercesImpl.jar, and many other libraries – via my environment’s $CLASSPATH. (Figuring that out was another adventure, but I digress.) Once it became clear that I was indeed using a broken library, it was simply a matter of explicitly specifying the dependency on a new version of xercesImpl, and rebuilding.
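The override itself is just an explicit top-level dependency in the pom, which takes precedence over the transitive one under Maven's nearest-wins mediation (the version below is illustrative – pick whatever current xerces release suits you):

```xml
<dependency>
  <groupId>xerces</groupId>
  <artifactId>xercesImpl</artifactId>
  <version>2.9.1</version>
</dependency>
```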
The moral?
Know your dependencies! This should come along with knowing your language’s built-in APIs well. It wasn’t clear to me that the SAX packages I was using were not part of the core java API, so it didn’t strike me as odd that I didn’t need to specify a classpath entry or a pom dependency before I could use sax.
If you suspect something strange, you can see the dependency tree in the generated html documentation you get when running mvn site.
gnome-control-center can be used to fix this, but it requires that the gnome-settings-daemon be running, which forces its opinions on many other aspects of my environment (I run Enlightenment dr17).
Poking around a bit, with some help from #e on freenode, revealed that xset can be used to fix the key repeat settings.
[cc lang=”bash”]
$ xset q
Keyboard Control:
  auto repeat:  on    key click percent:  0    LED mask:  00000000
  auto repeat delay:  660    repeat rate:  25
  auto repeating keys:  00ffffffdffffbbf
                        fadfffefffedffff
                        9fffffffffffffff
                        fff7ffffffffffff
$ xset r rate 250 40
[/cc]
Brewed, Bottled, Cultured and Sweetened is a blog about beer, coffee, wine, cheese, chocolate, etc… that I’m writing with an old friend from Dagit.o
There is an annoying bug in the sequence of code that manages the wacom rotation / sleep / resume and stylus calibration right now. (Where “right now” is Ubuntu Intrepid, with the 0.8.2-1 wacom drivers.)
This bug is documented over at the Ubuntu Launchpad, and the poster there does a fine job of describing the intricacies of reproducing it, so I’ll only give a brief explanation here to help this page get indexed.
If you rotate the screen any amount, even returning to the original rotation, and then sleep the machine, when it wakes up, the stylus will not be calibrated properly – the cursor will be off to the side of the stylus point. It doesn’t seem to matter how it was calibrated when the machine slept, nor does it matter what rotation you’re in when you put the machine to sleep.
There is one straightforward workaround: When you wake the machine, run wacomcpl, click on stylus, click calibrate (the mouse should now be under the stylus again), and exit wacomcpl. This is incredibly cumbersome, but at least it’s better than restarting X, which is what I have been doing.
Further inspection (based largely on the thread of comments on that launchpad bug) reveals that the problem is actually related to bad values for the TopX, TopY, BottomX and BottomY settings on the wacom devices after a resume. By resetting these to their proper values for the current rotation, we can reestablish the proper calibration. First off, we need to know the proper values, and the easiest way to get them is with xsetwacom
:
[cc lang="bash"]
echo "TopX=" `xsetwacom get stylus TopX`
echo "TopY=" `xsetwacom get stylus TopY`
echo "BottomX=" `xsetwacom get stylus BottomX`
echo "BottomY=" `xsetwacom get stylus BottomY`
[/cc]
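Since these four queries get run repeatedly, it helps to wrap them in a tiny function; the `wacomSettings` command that shows up in the next transcript is essentially this (a sketch, assuming `xsetwacom get` prints a bare number):

```shell
# wacomSettings: dump the stylus calibration bounds, one per line.
wacomSettings() {
  for prop in TopX TopY BottomX BottomY; do
    echo "$prop= $(xsetwacom get stylus $prop)"
  done
}
```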
Now, we’ll run this for each rotation, and save the results. You should end up with something like the following:
[cc lang="bash"]
|rogue on bach |AC 70% | @ 00:02:26 ~|
  $ xrotate 1 && wacomSettings
xrandr to left, xsetwacom to 2
TopX= -46
TopY= -3
BottomX= 18605
BottomY= 24518
|rogue on bach |AC 70% | @ 00:02:28 ~|
  $ xrotate 2 && wacomSettings
xrandr to inverted, xsetwacom to 3
TopX= 58
TopY= -46
BottomX= 24579
BottomY= 18605
|rogue on bach |AC 70% | @ 00:02:35 ~|
  $ xrotate 3 && wacomSettings
xrandr to right, xsetwacom to 1
TopX= -173
TopY= 58
BottomX= 18478
BottomY= 24579
|rogue on bach |AC 70% | @ 00:02:41 ~|
  $ xrotate 0 && wacomSettings
xrandr to normal, xsetwacom to 0
TopX= -3
TopY= -173
BottomX= 24518
BottomY= 18478
[/cc]
(Note that the first line of each entry above is my bash prompt; the command lines are indented, and the output is left-aligned.)
That gives us enough information to script the calibration when we resume. For example, when resuming to a “normal” rotation, I need to run:
[cc lang="bash"]
xsetwacom set stylus TopX -3
xsetwacom set stylus TopY -173
xsetwacom set stylus BottomX 24518
xsetwacom set stylus BottomY 18478
[/cc]
(Wrap that in a bash script and give it a shot!)
Here’s the full script that gets the current orientation and then calibrates the common wacom devices:
[cc lang="bash"]
#!/bin/bash
#
#
LOG=/home/rogue/calibration.out
XSETWACOM=/usr/local/bin/xsetwacom
#
#
function calibrate {
${XSETWACOM} --display :0.0 set stylus TopX $1 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set stylus TopY $2 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set stylus BottomX $3 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set stylus BottomY $4 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set eraser TopX $1 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set eraser TopY $2 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set eraser BottomX $3 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set eraser BottomY $4 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set cursor TopX $1 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set cursor TopY $2 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set cursor BottomX $3 >> ${LOG} 2>&1
${XSETWACOM} --display :0.0 set cursor BottomY $4 >> ${LOG} 2>&1
}
function fixCalibration {
# get the current orientation:
ORIENTATION=`xrandr --verbose --query | grep " connected" | awk '{print $5}'`
echo "Orientation: ${ORIENTATION}" >> ${LOG}
case "${ORIENTATION}" in
normal)
calibrate -3 -173 24518 18478
;;
left)
calibrate -46 -3 18605 24518
;;
right)
calibrate -173 58 18478 24579
;;
inverted)
calibrate 58 -46 24579 18605
;;
*)
calibrate -3 -173 24518 18478
echo "ERROR!! unknown orientation! ${ORIENTATION}" >> ${LOG}
;;
esac
}
case "$1" in
resume|thaw)
date >> ${LOG}
fixCalibration
whoami >> ${LOG}
;;
*)
echo "not a resume|thaw event: $1" >> ${LOG}
;;
esac
[/cc]
Stick that in /etc/pm/sleep.d/40wacomCalibrate
(or some similarly named file), make it executable by all (chmod a+x /etc/pm/sleep.d/40wacomCalibrate
) and it should be run when the system resumes.
Update: I found that the logging of the old script didn’t work, so I’ve updated the script to reflect that. There were also some problems with how I was testing the first script, and the actions I was taking didn’t actually trigger the bug. (The bug seems to be quite state-dependent, and my Markovian assumption was wrong.) To get this to work, root will need to have access to the display that xsetwacom uses. The simplest way to do this is to add xhost +
to your X startup. (I put it in my ~/.xsession just before exec enlightenment-start
).
The tablet screen is a wacom digitizer with a pen that has two buttons (eraser and a finger button), and the tablet can differentiate between touching and hovering. The linux wacom driver & tools are necessary to get this all working. While I didn’t find a single page with instructions that worked flawlessly, I was able to figure it out from a collection of links:
First off, you will need the latest version of the linux Wacom driver (0.8.2-1 at the time of this writing). The driver versions seem to be tied to your kernel version, so this is quite important. The wacom-tools package that comes with Ubuntu is not sufficient (in fact, you’ll want to uninstall it if you have it already).
Once you have the wacom package downloaded, follow the directions for installing it in the howto (linked above). The wacom package uses a typical configure, make, make install process but there are a few kinks:
* The modules install into a directory keyed to your kernel version, so you may need to copy the compiled module into the /lib/modules/`uname -r`/kernel/drivers/usb/input/ directory manually (creating subdirs if necessary), before running make install. (This is outlined in the mini-howto.)

Once wacom is installed, you can begin working with the X.org configuration. This is fairly clearly explained at the aliencam blog linked above, or you can use my xorg.conf here.
[cc lang="bash"]
Section "Device"
Identifier "Configured Video Device"
EndSection
Section "Monitor"
Identifier "Configured Monitor"
EndSection
Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Configured Video Device"
EndSection
Section "InputDevice"
Driver "wacom"
Identifier "stylus"
Option "Device" "/dev/ttyS0" # serial ONLY
Option "Type" "stylus"
Option "ForceDevice" "ISDV4" # Tablet PC ONLY
Option "Button2" "3"
EndSection
Section "InputDevice"
Driver "wacom"
Identifier "eraser"
Option "Device" "/dev/ttyS0" # serial ONLY
Option "Type" "eraser"
Option "ForceDevice" "ISDV4" # Tablet PC ONLY
Option "Button3" "2"
EndSection
Section "InputDevice"
Driver "wacom"
Identifier "cursor"
Option "Device" "/dev/ttyS0" # serial ONLY
Option "Type" "cursor"
Option "ForceDevice" "ISDV4" # Tablet PC ONLY
EndSection
Section "ServerLayout"
Identifier "Default Layout"
Screen "Default Screen"
InputDevice "stylus" "SendCoreEvents"
InputDevice "cursor" "SendCoreEvents"
InputDevice "eraser" "SendCoreEvents"
EndSection
[/cc]
After doing that, you should be able to reboot and the pen should be working. You can do things like configure the buttons with xsetwacom
(and you’ll need that when it comes time to rotate the screen), but I kept getting this error when I tried to run xsetwacom
:
[cc lang="bash"]
$ xsetwacom
xsetwacom: error while loading shared libraries: libwacomcfg.so.0: cannot open shared object file: No such file or directory
[/cc]
I made a lucky guess, and fixed the problem with a quick ldconfig:
[cc lang="bash"]
$ sudo ldconfig # that was a lucky guess.
[/cc]
Update: There were some issues with the wacom calibration after a sleep/resume cycle if the laptop screen had been rotated during that prior wake cycle (this happens a lot more than it seems, given how complex that description is.) I’ve written up a workaround here.
]]>First off, some specs:
I’ll flesh that list out more as I can find the details (eg: wireless chipset, video, etc..)
Of course, I blew some time poking around in Vista first :). The handwriting input app is phenomenal in a lot of ways. It works very well, training is well integrated, and it has worked with every input area I’ve tried. It could be better if it had contextual clues, and could tie into things like Eclipse’s intellisense. Overall, though, it is amazing how simple it is to use, and how aesthetically pleasing the handwriting actually is. There is a lot to be said for using a couple extra pixels to make the strokes taper off as you pull the pen away. It has QWAN.
That done, I started to move on to installing Linux. I’m giving Ubuntu 8.10 the first chance, and I thought I’d try using a USB-based install so I wouldn’t have to monkey around with the Ultrabase & drive. If you have an 8.10 system already, you can easily create a bootable usb ubuntu drive with usb-creator
and an ubuntu iso. This takes perhaps 45min - 1 hour.
Booting was as simple as going into the IBM BIOS-like page (by hitting the ThinkVantage button on boot) and telling it to boot from another device, then selecting the usb drive (that I had already inserted). I split the existing 200gb partition in two with the ubuntu installer, keeping Vista in its 100 gig sandbox, and leaving the remaining ~100 gigs for Ubuntu to partition further (which it did, as two partitions: one for / and one for swap. /dev/sda5 and /dev/sda6).
I do wish it had said how much space was being allocated to each of those partitions though. The installer didn’t give any indication.
Installation from booting the installer from usb to booting into the installed system took right about 30min. I’m impressed :)
Out of the box:
More information as I figure it out :)
]]>A couple google searches later turned up this link:
http://dev.eclipse.org/newslists/news.eclipse.platform/msg62159.html
The poster in that thread had the same problem (back in Feb. 2007), and found the answer, but none of the content in that thread makes it trivial to locate the answer again.
The responder (with the answer) simply included a link to another mailing list:
http://dev.eclipse.org/mhonarc/lists/cross-project-issues-dev/maillist.html
Notice that that page is not constant. Today, it shows the most recent posts as of October 31st, 2008. In order to figure out what had happened to startup.jar, I had to take into account the OP’s response (“Ok so this is very recent.”), the timestamp on the messages (Mon, 12 Feb 2007) and then navigate the mailing list archives to find that time period, and start reading.
Please don’t put people through this sort of crap. It’s generally not difficult to find permalinks to a given email, or to include a quick note with the actual answer. I have the answer now (startup.jar was replaced with org.eclipse.equinox.launcher in 3.3), but there is no way that I can tie that answer to the conversation I’ve linked to above.
For the purposes of Google:
If you’re having this problem:
I’m trying to do some automation, but I’m running into a problem with the 3.3 integration build.
java -cp plugins\org.eclipse.platform_3.2.100.v20070126\startup.jar org.eclipse.core.launcher.Main
doesn’t do anything. It doesn’t say anything. The only information I’m getting is an exit status of 13.
Then you need to use “java -jar plugins/org.eclipse.equinox.launcher_1.0.0.v20070207.jar” (adjusting the version numbers for your installation).
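Since the launcher jar’s name embeds its version, a small helper keeps scripts from breaking on every upgrade. A sketch (the plugins/ layout is assumed from a standard Eclipse install):

```shell
# find_launcher: print the newest equinox launcher jar under the given
# eclipse install directory, without hard-coding the version suffix.
find_launcher() {
  find "$1/plugins" -maxdepth 1 -name 'org.eclipse.equinox.launcher_*.jar' 2>/dev/null | sort | tail -n 1
}

# Usage: java -jar "$(find_launcher /path/to/eclipse)"
```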
]]>I’ve been digging into OSGi a bit over the last week or so in order to create some Eclipse plugins that will automatically discover each other, and I’ve been generally impressed with the framework on the whole. The documentation is a bit lacking, but there are some good blog posts about it. (Specifically Neil Bartlett’s introduction to OSGi.)
One thing that bugged me is the repetition needed when you implement the CommandProvider interface to add commands to the OSGi console. CommandProvider defines one method you must supply:
[cc lang="java"]
public String getHelp()
[/cc]
OSGi then uses reflection to extract each of the methods that starts
with an underscore, and supplies those methods to the command environment as
new commands. (The underscore is trimmed, and the name of the method becomes
the command name.) General practice is to include the name of the
method in the return value of getHelp()
, along with a description of
what the method does, eg:
[cc lang="java"]
public class SampleCommandProvider implements CommandProvider {
    public synchronized void _run(CommandInterpreter ci) {
        // do stuff.
    }

    public String getHelp() {
        return "\trun - execute a Runnable service";
    }
}
[/cc]
This seems like a pain to maintain, so I took a quick look at annotations, and propose a new syntax:
[cc lang="java"]
public class SampleCommandProvider extends DescriptiveCommandProvider {
    @CmdDescr(description="execute a Runnable service")
    public synchronized void _run(CommandInterpreter ci) {
        // do stuff.
    }
}
[/cc]
Here we’ve extracted the getHelp()
method into a new superclass, so
our SampleCommandProvider now extends an abstract class instead of
implementing an interface. It also makes use of an Annotation, which
we need to define:
[cc lang="java"]
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface CmdDescr {
    String description();
}
[/cc]
Finally, we just need to define the superclass that implements
getHelp()
:
[cc lang="java"]
import java.lang.reflect.Method;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.eclipse.osgi.framework.console.CommandProvider;

public abstract class DescriptiveCommandProvider implements CommandProvider {
    private static final Pattern CMD_PATTERN = Pattern.compile("_(.*)");
    private String help = null;

    public String getHelp() {
        if (null == help) {
            help = buildHelp();
        }
        return help;
    }

    private String buildHelp() {
        StringBuilder helpBuff = new StringBuilder();
        for (Method m : this.getClass().getMethods()) {
            if (methodIsCmd(m)) {
                if (0 != helpBuff.length()) {
                    helpBuff.append("\n");
                }
                helpBuff.append(getDocumentation(m));
            }
        }
        return helpBuff.toString();
    }

    private boolean methodIsCmd(Method m) {
        return CMD_PATTERN.matcher(m.getName()).matches();
    }

    private String getDocumentation(Method m) {
        StringBuilder methodHelp = new StringBuilder();
        Matcher matcher = CMD_PATTERN.matcher(m.getName());
        if (matcher.matches()) {
            methodHelp.append("\t" + matcher.group(1));
            CmdDescr description = m.getAnnotation(CmdDescr.class);
            if (null != description) {
                methodHelp.append(" - " + description.description());
            }
        }
        return methodHelp.toString();
    }
}
[/cc]
Note that the actual reflection on the class only happens once – all
subsequent calls to getHelp()
use a cached copy of the documentation.
* It’s a power strip
* It’s a network hub
* It’s a USB hub
* You clamp it onto the back of any desk
The idea being that this would make it easy to plug in laptops, USB peripherals, and all your rechargers at your desk without crawling around on the floor.
He links to a device that does some of this, and runs ~$150/device. At that price, I think a better solution is a docking station: when you get down to it, I don’t want to plug in four things (power, video, USB, Ethernet, and possibly audio) every time I sit down, even if it doesn’t involve crawling under the desk. I think it’s unlikely that all the features above are really necessary when you just show up for a meeting, or hop over to your coworker’s office for a short hacking session. Many conference rooms these days already have tables wired for Ethernet / power and SVGA video to a projector.
]]>