tag:blogger.com,1999:blog-85589900299289038272024-03-08T12:10:27.649-05:00Watching the Dandelions GrowComments from out of right field on computing, Christianity, and other fancy stuffPaul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.comBlogger16125tag:blogger.com,1999:blog-8558990029928903827.post-30601425532626627772022-05-01T11:11:00.000-04:002022-05-01T11:11:00.752-04:00An ortholinear split keyboard design<p>I have spent some time developing a keyboard concept; I am proud enough of the design to want to share it even though I seem unlikely to actually implement it.</p>
<img border="0" style="width: 100%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBSacWdnSIDbCT7PZEW4ASQ_hfcHUNzum1E4UBzBWhDqaam-KdKpuAhXngwrkOEejl7CclD_EhUHkQAo3mPf22GZb3kdGutwipg9jf4q-h9FvCTTQENwGVPH2w0EvlvfBEzYf-wbMTdwemVxZS8NVa1bBHQ_iMTSBocnv9TGfNShJrYNBWFCu4a4iK/w663-h226/keyboard-A.png"/><h3>Layout and Rationales</h3>
<p>The general design is inspired by split keyboard designs such as the <a href="https://mattgemmell.com/the-corne-keyboard/">Corne keyboard described in one of Matt Gemmell's blog posts</a>. While the orthocolumnar key positions and angled thumb clusters are clearly more ergonomic (the fingers' rest positions are not arranged in a line and the thumb moves at an angle rather than up and down; the natural reach for the index finger is also lower than the rest position), I chose an ortholinear layout for aesthetic reasons, with some expectation of psychologically easier adaptation and possibly some manufacturing advantages. The design also differs substantially from the Corne in having: six thumb keys (vs. three), an extra column for the little finger, four extra keys on each of the thumb upper rows, and eleven hard-to-reach keys (yellow in the image). A little bit of experimentation using a number pad gives me some confidence that the extra keys (excluding the difficult eleven of each hand) are not unreasonable, though such gives almost no indication of actual discomfort from extended use. The extra thumb keys in particular seem important for making all five modifier keys (shift, alt, layer, system, and — with the index finger — ctrl) available for each hand; even if holding the modifier key while typing were not required (e.g., tap setting the modifier for the next keystroke, with another method to lock and unlock a modifier), there may be some advantage for hand use balance and possibly ease of learning. Having space (tab when shifted), backspace (backtab when shifted), enter, and delete as thumb keys is similar to Matt Gemmell's use for the Corne — since his layout has shift only on the same thumb cluster with space and backspace, he cannot use the mnemonic of emphatic/shifted space and backspace for tab and backtab (I am a little concerned about mapping backtab to a shifted destructive key, but undo functionality is common).</p>
<p>The letter placements are clearly inspired by Dvorak and Colemak, but the extra little finger column on each hand allows the four least used letters (z,x,q,j) to be moved to make room for ctrl and one symbol key on each hand as well as another distant (but still comfortable for the little finger) symbol key and action key (pause, menu) for each hand.</p>
<p>I am rather proud of the symmetry of the layout. The enclosure marks are in the same position for each hand; «,?» and «.!», «\`» and «/%», «-_» and «+|» are in the same position for opposite hands; and, in the base layer, whether a position holds a letter or a symbol is the same on each hand. The letter placement is largely guided by frequency and by having vowels on the right hand (I am guessing that having the vowels on one hand may also help in the learning process). The placement of F and P in the same row and P and B in the same column may help in learning because of the sound association; U and V have a visual similarity (while the separation may reduce mistakes); M being adjacent to N seems likely to be helpful. The shifted symbol mappings also have significant mnemonic aspects; '$' is associated with numbers (#), '+' with or (|), '*' with and (&), ',' with '?' (pause), '.' with '!'. The more angular enclosure marks ('<>' and '{}') are shifted from the visually closer enclosure marks ('[]' and '()').</p>
<p>The page up (🡅) and page down (🡇) keys may be frequently enough used to justify being accessed by the index finger. The arrow keys are in standard positions but for the left hand. The home (⇤) and end (⇥) keys are on the main row and have positional mnemonics (⇤ is the leftmost left hand home position and ⇥ is a rightward reach for the index finger) as well as letter mnemonics ("home" begins with 'H' and "end" ends with 'D'). I might have preferred using shift arrow for the more "emphatic" movements, but shift-navigation-key is used for text selection.</p>
<p>Many keys in the number/navigation layer do not have any special assignments. Keeping '+', '-', '*', '/', '.' and '=' in their base positions would provide all the keys associated with a typical number pad in that layer without having multiple positions for a key (depending on the layer). (Keeping '(' and ')' would provide a reasonable calculator interface. And having capital A-F in the same shifted positions would provide hexadecimal number entry in that layer with only the requirement of shifting.) Even with so much key "transparency" between base and num/nav layers, there would still be many keys available for new uses; however, I have not been struck by firm inspiration for uses. (Page/document set "start/top", "end/bottom", "previous", "next", and "up" could be useful — "down" is less useful because in a tree organization down is not specific, though it could mean "next downward". History navigation is also different from text navigation and document set navigation.)</p>
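The key "transparency" idea — a position with no assignment in the number/navigation layer falling through to its base-layer value — can be sketched in a few lines of C. Everything here (the `KC_TRNS` sentinel, the layer names, the tiny keymap) is a hypothetical illustration rather than any particular firmware's API:

```c
#define KC_TRNS (-1)   /* "transparent": no assignment in this layer */

enum { BASE, NUMNAV, NUM_LAYERS };
#define NUM_KEYS 4

/* A tiny illustrative keymap: positions holding '+', '-', a letter,
   and a key that becomes '7' in the number/navigation layer. */
static const int keymap[NUM_LAYERS][NUM_KEYS] = {
    [BASE]   = { '+', '-', 'u', 'a' },
    [NUMNAV] = { KC_TRNS, KC_TRNS, '7', KC_TRNS },
};

/* Resolve a key position: transparent entries fall through to the base
   layer, so '+' and '-' occupy only one logical position across layers. */
int resolve_key(int layer, int pos) {
    int code = keymap[layer][pos];
    return (code == KC_TRNS) ? keymap[BASE][pos] : code;
}
```

With this fall-through rule, keeping '+', '-', '*', '/', '.' and '=' in their base positions is simply a matter of marking those positions transparent in the num/nav layer.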
<p>I am hoping that these layout factors will significantly facilitate learning; quoting Matt Gemmell: "When moving to a new and thus unfamiliar layout, remember there are other things to capitalise on: consistency, logical arrangement, personal needs, symmetry and so on. These can become the basis of a new familiarity, and will aid learning."</p><p>(Earlier, I had planned for a one-piece keyboard with significant hand separation. Adding three or four intermediate columns would provide seven or eight key separation of the hands compared to two key separation on a typical keyboard. Even without that addition — or placing a trackpad in the middle ☺ — the separation would be three keys more than a typical keyboard. A 17- or even 19-key-wide keyboard might be practical on a laptop.)</p>
<p>I am not absolutely convinced that the assignments are the best possible for the spatial layout nor that the spatial layout is the best management of the tradeoffs of aesthetics, familiarity, and ergonomics. However, I do think that this is a solid design.</p>
<p>I am slightly concerned about the use of a layer to access numbers and navigation. Since I am already a little familiar with a number pad arrangement and the layer key on each hand seems less difficult to reach (located where the zero is for the right hand on a typical number pad) than the shift keys on a typical keyboard, I am hoping that entering numbers will not be problematic. The advantage of having numbers be more reachable and less subject to error than when using a number row seems to justify this choice.</p>
<p>The yellow key positions seem of questionable utility, but their presence would not affect the width even of split sections. I do think that providing F0 through FF for function keys is cute. The other six yellow keys might be useful for infrequently used operations for which quick access might still be useful (e.g., mute, screen lock).</p>
<p>Adding more layers has some attraction. Highly modal operations (such as navigation and system control) would seem to fit the layer model. While double-tap of the layer key might reasonably 'lock-in' a third layer (with a tap of the layer key to unlock), accessing additional layers would seem to present some learning challenges.</p>
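The double-tap "lock-in" behavior described above can be sketched as a small state machine. The timing thresholds, struct fields, and function names below are all invented for illustration; a real firmware would need debouncing and interaction with other keys:

```c
#include <stdbool.h>
#include <stdint.h>

#define TAP_MS        200  /* press shorter than this counts as a tap */
#define DOUBLE_TAP_MS 300  /* two taps within this window lock the layer */

typedef struct {
    bool layer_active;     /* layer currently in effect (held or locked) */
    bool locked;
    uint32_t press_ms;     /* time of the most recent key-down */
    uint32_t last_tap_ms;  /* time of the last completed tap (0 = none) */
} LayerState;

void layer_key_down(LayerState *s, uint32_t now_ms) {
    s->layer_active = true;        /* momentary while held */
    s->press_ms = now_ms;
}

void layer_key_up(LayerState *s, uint32_t now_ms) {
    bool is_tap = (now_ms - s->press_ms) <= TAP_MS;
    if (s->locked) {
        if (is_tap) {              /* a tap while locked unlocks */
            s->locked = false;
            s->layer_active = false;
        }
        return;                    /* a hold while locked changes nothing */
    }
    if (is_tap && s->last_tap_ms != 0 &&
        now_ms - s->last_tap_ms <= DOUBLE_TAP_MS) {
        s->locked = true;          /* double-tap: lock the layer in */
    } else {
        s->layer_active = false;   /* plain release ends momentary use */
    }
    if (is_tap)
        s->last_tap_ms = now_ms;
}
```

A third layer could reuse the same machinery keyed off a different physical key, which is where the learning challenge mentioned above starts to bite.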
<h3>Motivation</h3>
<p>My motivations for this effort seem to include the general fun of spatial design with some analysis of tradeoffs, a desire for a more ergonomic keyboard — particularly in moving modifier keys off of my little fingers, one of which has developed bone spurs — and a desire for a sense of accomplishment in a simple hardware-ish project.</p>
<p>The earlier-mentioned <a href="https://mattgemmell.com/the-corne-keyboard/">article by Matt Gemmell</a> was very helpful in guiding my thinking about key placement. I am a little jealous of his four-layer system ("base" with letters and some common keys; "navigation" with both mouse and cursor controls; "numpad" with numbers and symbols; "adjust" with media and screen controls). I am not certain I could learn to manipulate a mouse pointer well with a keyboard, but the concept is intriguing. Media controls would seem to map easily (in terms of remembering the mappings) to the navigation keys (page up and page down for volume control, left for rewind, right for fast forward, down for play/pause ("here"), up for stop (intensified [above] pause)).</p><p>My design has 62 "essential" keys (excluding the 25 "yellow" keys) compared to the Corne's 42 (though four modifier keys are replicated for each hand), so there is less pressure to add layers, but layers can take advantage of associations and exploit modal activity.</p>
<h3>Implementation and Future Considerations</h3>
<p>If I were to implement such a design, I would be rather inclined to use rubber dome pressure switching primarily for lower cost (financial as well as time and effort, both start-up and incremental) but also from some concern about noise and required finger pressure. The concern about noise and finger pressure could be addressed by using more expensive, quiet linear switches; even without such added costs, the barrier to trying is high enough that I seem unlikely to try. I very much dislike being stuck with relatively expensive items that I am not going to use and that I can not give away to someone who really wants such. (I also have considerable issues with fear of failure, sometimes including self-sabotage — if one does not hope to succeed and does not put in the required effort, failure is not a crushing statement of how worthless one is. Thinking about things until there is no risk is generally not a productive strategy.☺)</p>
<p>With more intelligence in the keyboard, double-tap and hold could be distinguished from a regular keypress, some idioms could be automatically composed without having to use the compose key (e.g., "--" → "—"), and perhaps the repeat rate might be usefully configured differently for different keys (including different acceleration). There might also be some uses for multiple non-modifier key presses at the same time, at least for the home row.</p>
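The "--" → "—" composition could be handled by a small filter on the outgoing key stream. This is a rough sketch under invented conventions (the key codes and the buffer-based interface are placeholders; real firmware would also have to handle key-up events, timeouts, and delivering a non-ASCII character to the host):

```c
#define KEY_MINUS  '-'
#define KEY_EMDASH 0x2014  /* em dash code point; host-side delivery not shown */

typedef struct { int pending_minus; } ComposeState;

/* Feed one key code; writes the codes to actually send into out[]
   (at most two) and returns how many were written. The first '-' is
   held back until we know whether a second one follows. */
int compose_feed(ComposeState *s, int code, int out[2]) {
    int n = 0;
    if (code == KEY_MINUS) {
        if (s->pending_minus) {        /* "--" completed: emit an em dash */
            s->pending_minus = 0;
            out[n++] = KEY_EMDASH;
        } else {
            s->pending_minus = 1;      /* hold back the first '-' */
        }
        return n;
    }
    if (s->pending_minus) {            /* lone '-': flush it before code */
        s->pending_minus = 0;
        out[n++] = KEY_MINUS;
    }
    out[n++] = code;
    return n;
}
```

The same state-machine pattern would extend to distinguishing tap, hold, and double-tap, or to other composed idioms.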
<p>While a split design would require more modifications to a typical keyboard case and complicate the electronics (with financial and effort costs), it might be a little more portable, has some ergonomic advantages, and has a little more "coolness". If a separate microcontroller was used for each hand, it might be easier to increase the number of overloaded keys (supporting simultaneous pressing). A split design would also complicate the connection; implementing this as two USB devices might be simpler for an initial prototype. Given my motivational issues, getting something earlier that mostly works could be important.</p>
<p>If I actually successfully produced a version of the above design, I might well be tempted to pursue a "second system" to include additional features. Including a USB hub seems especially desirable as such would provide for convenient insertion of a thumb drive and a YubiKey as well as a mouse. Adding a USB hub would require considerably more expensive hardware, but it would also add the fun of considering how ports should be placed. I suspect two rear ports and two side ports might be a good basic design; the rear ports might be reasonable for attaching persistently present devices such as a mouse while side ports might be a little more accessible. Vertical ports would seem to avoid potentially having to hold the keyboard to insert a device, but such might be difficult to fit in the keyboard form factor.</p>
<p>(I am surprised that including USB ports in keyboards is not more common; even most of the (somewhat expensive) ergonomic keyboards seem not to include such. Plugging a mouse into one's keyboard would allow for a shorter cord — yes, I am aware that wireless mice exist, but I am turned off by having one more set of batteries to deal with and a little uneasy about reliability — but having USB ports readily at hand for YubiKeys, thumb drives, and mobile devices seems even more significant. Supporting fast charging through a keyboard might be expensive, but for my uses that would not be necessary.)</p>
<p>It would be really cool if this design (or one similar to it) was popular enough to support a modest production run (~100 devices), but that seems very unlikely. </p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-46566334162657955412017-07-30T16:34:00.002-04:002017-07-30T16:34:57.921-04:00The phases of work<p>Ordinarily, work is a gas, expanding to fill its containing time. Under intense pressure, supercool it and work can become a coherent liquid, but it is extremely difficult to get solid work. (If one puts enough energy into work it may become a plasma, but then one really wants to be sure it is safely contained or it will consume everything around it.)</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-84581642349636908052017-05-07T22:39:00.002-04:002023-01-19T13:32:45.650-05:00Spider-page Theme Song<p>Joking with a fellow page at the Silver Spring Library about becoming an extremely effective page by being bitten by a radioactive spider (given the general understaffing at the library), eventually led me to thinking about "the Spider-page". I had a few general thoughts about a comic, but today I finished the "Spider-page Theme Song":</p>
<p>Spider-page, Spider-page. Working hard for his spider-wage.<br />
Sorting books, fronting shelves. Outworking book-work elves.<br />
Look out, there goes the Spider-page.</p>
<p>Is he smart? That's insane! He's got more than an average brain.<br />
Can he read faded ink? His eight eyes never blink!<br />
Hey, there! Work for the Spider-page.</p>
<p>When a book's out of place anywhere in the stacks<br />
You will feel your heart race when you hear his web thwacks! (Shhh!)</p>
<p>Spider-page, Spider-page. Local library Spider-page.<br />
Bonuses, he ignores. Order is his reward.<br />
Look out, there goes the Spider-page.</p>
<p>Spider-page, Spider-page. Local library Spider-page.<br />
Bonuses, he ignores. Order is his reward.<br />
For OCD he has 'fessed up. Whenever things are messed up<br />
Go find the Spider-page!</p>
<p>It is not a perfect parody of the <a href="http://www.azlyrics.com/lyrics/ramones/spiderman.html">Ramones' "Spiderman"</a>, but it seems somewhat decent.</p>
<p>While the "book-work elves" came only to make the rhyme, it did lead to the thought that in a comic most of the elves would be grateful for the Spider-page's work (perhaps allowing them to do more traditional book-elf work like repairing books) but one elf (perhaps an especially big and strong one, thinking that the book-elves would be like the shoe-brownies in size) would be annoyed that his special ability (physical strength) had been made less important, becoming an antagonist.<br />
In a comic, the "web thwacks" would be humorously inappropriate noise in a library (and contrast with Spiderman's quiet "thwip").</p>
<p>The obsession with order is probably not an uncommon feature for pages, and exaggerating such fits a comic. The changes to the last lines of four of the stanzas seem also to fit with the Spider-page being less agile and even less respected than Spiderman ("Look out, there goes the Spider-page" has more of a sense of "Watch out, coming through" vs. "Pay attention and you can see a superhero", and "Work for the Spider-page" and "Go find the Spider-page" imply that his OCD is somewhat abused to give him the less desirable tasks). The eight eyes also present more spider-like features; in a comic, I think he would have a spindly-limbed appearance, perhaps with an expanded abdomen — incidentally fitting a particular nerdy stereotype so that there could be humor in his becoming more nerd-like in appearance after becoming a superhero. Where Spiderman has a tingling spider-sense, the Spider-page could have a throbbing page-sense, the transformation enhancing a vocational/human trait rather than providing a quasi-spider trait.</p>
<p>I very much doubt that a comic will develop from this (not even a single story). While I could probably do a stick-figure mock-up with dialogue given a substantial time investment, a serious effort would require someone with skill at drawing. If I had strong motivation to work on such a project, it might be practical (artists can be hired), but so many other potential projects are so much more attractive (more suited to my skills, more likely to be useful/appreciated by others, better matching my affections) to me that making a Spider-page comic seems unlikely (though the concept seems fun).</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-85044316449621807682016-08-02T08:46:00.000-04:002016-08-02T09:10:58.645-04:00Library Page SongThe following lyrics playfully parallel "Day-O", with appropriate adjustments for library pages:<br />
<blockquote>
<p>
Hey-yo, hey-yo<br />
Library closed and me wan' go home<br />
Hey, me say hey, me say hey, me say hey<br />
Me say hey, me say hey-yo<br />
Library closed and me wan' go home </p>
<p>
Work and work with a bit of fun<br />
Library closed and me wan' go home<br />
Shelve those booksies till me shift is done<br />
Library closed and me wan' go home </p>
<p>
Hey, circulation staff, tell me me can go now<br />
Library closed and me wan' go home<br />
Hey, circulation staff, tell me me can go now<br />
Library closed and me wan' go home </p>
<p>
Shelve one full, two full, three full trucks<br />
Library closed and me wan' go home<br />
One full, two full, three full trucks<br />
Library closed and me wan' go home </p>
<p>
Hey, me say hey-yo<br />
Library closed and me wan' go home<br />
Hey, me say hey, me say hey, me say hey, me say hey, me say hey<br />
Library closed and me wan' go home </p>
<p>
Our lovely stacks o' nice fronted booksies<br />
Library closed and me wan' go home<br />
Hide the friendly little mousies<br />
Library closed and me wan' go home </p>
<p>
Shelve one full, two full, three full trucks<br />
Library closed and me wan' go home<br />
One full, two full, three full trucks<br />
Library closed and me wan' go home </p>
<p>
Hey, me say hey-yo<br />
Library closed and me wan' go home<br />
Hey, me say hey, me say hey, me say hey ...<br />
Library closed and me wan' go home </p>
<p>
Hey, circulation staff, tell me me can go now<br />
Library closed and me wan' go home<br />
Hey, circulation staff, tell me me can go now<br />
Library closed and me wan' go home </p>
<p>
Hey-yo, Hey-yo<br />
Library closed and me wan' go home<br />
Hey, me say hey, me say hey, me say hey<br />
Me say hey, me say hey-yo<br />
Library closed and me wan' go home</p></blockquote>
<p>
While the meter is a bit off in several places and the use of "circulation staff" conflicts with the grammar and vocabulary style of the rest of the song, I am somewhat happy with the result.<br />
</p>
<p>
Of course, the lyrics are not an accurate portrayal of working as a library page. One might shelve three full trucks of books in a shift, but one does not need to ask staff if one can leave and most page shifts end before the library closes. Mice have been seen at the Silver Spring library, but they probably do not hide among the stacks. (I also sometimes wish I could stay overnight to do things at a more leisurely pace and work on lower priority tasks.)</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-2470384200022107612015-03-29T14:44:00.000-04:002015-03-29T14:44:16.425-04:00A devotional thought on the sacrament<p>In coming to receive Communion today, my mind was drawn to the marital vow "with this ring I thee wed" and somewhat quickly went on with "with my body I thee worship". The first presents the sacrament as both an outward sign and an actual seal of covenant relationship as a wedding ring can act both as a sign of being married and a signature realizing the marriage. The second represents how the incarnation ("with my body") — and particularly the death of Christ which is proclaimed in the sacrament — both declares God's love (considering his beloved worthy) and declares the beloved to be worthy as an effective, creative act.</p>
<p>Interestingly, that part of the wedding vows from the Book of Common Prayer then adds "with all my worldly goods I thee endow", marking a distinction between ordinary grooms and Christ since the Father of this Groom "has blessed us in Christ with every spiritual blessing in the heavenly places" (Ephesians 1:3, ESV), and, of course, if the Father has given his best and most precious, "how will he not also with him graciously give us all things" (Romans 8:32, ESV).</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-81549475929804173392013-09-02T11:36:00.000-04:002013-09-02T11:36:31.620-04:00Minimum knowledge for saving faith<p>Today I was disappointed by a statement of John MacArthur. On his <em>Grace to You</em> radio program a question was asked about what knowledge was necessary for salvation. The asker had debated with a friend who believed that even today people could be saved like Abraham by a relatively uninformed faith while he maintained that more extensive knowledge was also necessary (e.g., in light of Romans 10:14-15--how can they call on God unless preachers are sent?).</p>
<p>John MacArthur initially stated that the asker was correct, rightly pointing out that knowing God as creator was certainly insufficient (that knowledge of God's holiness, of one's falling short, and of God's providing a way of reconciliation was necessary) and that God was certainly powerful enough and providential enough to bring the good news to whomever he has chosen. Yet he later added that it was possible that God could bring saving faith to a person without natural transmission of the gospel.</p>
<p>While I recognize that he was answering spontaneously (and his knowledge and reason are impressive in this context), I wish that he had instead said something like: </p>
<blockquote><p>You are mistaken in fact, but your friend may be mistaken in spirit. Saving faith does not require specific information about the mechanism but only a proper awareness of the need for salvation--that God is holy and one is a sinner--and that God is able and willing to save.</p>
<p><em>However</em>, anyone who has been made alive by the Spirit will long to look into such things and will rejoice in the truth of the gospel as it is revealed.</p>
<p>The question is like asking if a few scraps falling from the table at a great feast are sufficient to sustain life. While such dirty scraps may be sufficient, it is madness to suppose that one would be content to allow just the aroma of the feast to stir hunger and bring people toward the table and to allow such people to subsist on scraps. Like the aroma of a feast, the Spirit may directly make people aware of the need and draw people to salvation, and perhaps some such may not hear the gospel in this life--subsisting on dirty scraps.</p>
<p>However, focusing on what is barely sufficient denies the spirit of the feast--grateful joy and abundance--and presents the danger of coming to believe that merely eating dirt is sufficient for life.</p>
</blockquote>
<p>Given that I am not really satisfied with the above suggested alternative (e.g., there is no mention of the necessary reaction to the minimum knowledge) despite having more time to work on it, it should not be surprising that even John MacArthur would not always provide a spontaneous and fully satisfactory answer to such a question. One might argue that the spontaneous nature of the responses was unnecessary, but this form of question and answer may have been helpful for the original askers (and replaying such might best exploit his limited time) and the form itself may remind listeners of the importance of being prepared in season and out of season.</p>
<p>While it is good not to be satisfied with anything less than perfection, it is also necessary to rejoice in <em>every</em> grace and trust in God's providence--that even imperfections are compelled to declare the glory of God.</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-4258756241777806162012-07-14T17:00:00.000-04:002012-07-14T17:00:11.780-04:00Balance vs. fullness [a bit of a rant]<p>The popularity of using balance as a goal bothers me. Among the pitfalls of using balance as a goal are not recognizing common benefits (and along with this an antagonistic viewpoint where any benefit toward one aspect tends to be viewed as a detriment toward others--"balance" even seems to imply a dualistic perspective) and attempting to establish a single metric which can be applied equitably to all interests.
</p>
<p>One problem with a single metric is that it becomes very tempting to choose one that is relatively obvious and easy to measure and apply it in a simple manner. E.g., a balanced taxation proposal might see individual income as an easily measured quantity and apply a flat (equitable by income) tax. However, even a flat tax based on discretionary income might be inappropriate. (Interestingly, a flat tax on assets would seem to have some attractive properties. Such would encourage the use of assets to increase productivity. Unfortunately, assets are more difficult to measure than income [education, aptitude, health, etc. are assets which influence potential productivity] and not all productivity [social good] generates income with which to pay a tax.)
</p>
<p>(It is tempting to see this issue as analogous to the issue of the nature of Christ, where the "balance" perspective proposes that Christ's nature is part-human and part-Divine where the "fullness" perspective proposes that his nature is fully human and fully Divine. Such an analogy might be improperly biased in favor of my own perspective rather than seeking an understanding of truth.)
</p>
<p>The use of fullness (with the concept of perfection or perhaps complete integrity) as a goal may avoid some psychological/moral issues, but it seems to draw out significant measurement issues (which has the good aspect of forcing thought and recognition of complexity but the bad aspect of potentially disintegrating into a contemplation of [or argument about] measurement rather than adopting a more integrated perspective which recognizes that while establishing measurements helps to clarify goals [at many levels] and estimate progress [and so guide resource allocation, including time], establishing measurement is a servant of other aspects [while the other aspects likewise submit to measurements]).
</p>
<p>When using perfection (fullness/integrity) as the goal, one must also maintain a sense of context. A human goal of perfection takes into account finitude. A result that is perfect at a given time may easily be a very poor result if the achievement of that result is necessarily delayed by resource limits. Likewise a dependence on grace seems to be necessary (this may be related to Martin Luther's "sin boldly"). Knowing that even stupid failures (and many failures are 'recognized' as stupid in hindsight) are not damning provides a freedom to strive for perfection rather than being paralyzed by uncertainty (the measurement problem)--or even the certainty of failure (the recognition of inadequacy)--or settling for a safe result (like burying the talent [Matthew 25:24-30]).
</p>
<p>In <i>Mere Christianity</i> C.S. Lewis wrote "The only fatal thing is to sit down content with anything less than perfection." This is not a paralyzing perfectionism (which is a significant issue for me), but a call not to stop being a soldier until the war is won. Such a seeking of perfection not only motivates a full commitment of effort (sometimes enabling success) but sometimes produces an accidental success beyond expectation.
</p>
<p>(My own perfectionism--both from uncertainty of what should be done and from perception of inadequacy--very often leads to inactivity, which is very far from the striving for perfection that is the human calling. On the positive side, a more proper fondness for perfection may be involved in my perfectionism--i.e., my perfectionism may in part represent a corruption of a particular gift of affection for the highest and best.)
</p>
<p>Of course, the very use of "vs." in the title demonstrates how easy it is to fall into an antagonistic (rather than holistic) perspective.
</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-10784878674292576752012-04-18T18:11:00.000-04:002012-04-18T18:11:44.594-04:00Nitpicking on "No Silver Bullets", part 1, HLL disadvantages<p>A <a href="http://www.johndcook.com/blog/2012/04/17/no-silver-bullet/">recent post</a> on John Cook's blog mentioned a presentation by Mike Swaim (titled "No Silver Bullet") about the advantages and disadvantages of certain programming techniques. Looking through the PDF, a few points bothered me. (Yes, nitpicking over bullet points is not exactly fair, and I am not at all claiming that the presenter was not aware of the issues.)</p><p>The disadvantages of high level languages were basically listed as reduced performance, "inappropriate runtime requirements" (which might be excessive resource requirements or things like garbage collection pauses in a tightly constrained hard real-time environment), and (based on the NT example) the inability to fix or even work around errors in architecture/design or implementation.</p><h2>Performance issues</h2><p>It seems that the performance problem comes from two issues. First, the compiler (and runtime system) does not fully exploit the available information. This comes from two sources: the lack of machine resources allocated to the compiler and the lack of programming effort applied to developing systems for translating higher level specifications to machine language code. While this can be seen as approaching artificial intelligence requirements, the difficulty of the goal is not as great as for generic artificial intelligence.</p><p>Some strengths of existing computing systems could be more fully exploited, most particularly the relative abundance of idle time on many computing systems and the relative reliability of detailed long-term memory. 
Offloading some of the compiler's analysis effort to development time could provide more extensive analysis without increasing compilation time. Caching information about software could also substantially reduce the amount of analysis required at compilation time (or run time).</p><p>The software effort problem is unnecessarily hindered by issues with the definitions of behavior for programming languages (which can include behavior expected by existing software).</p>
<p>The second issue is the lack of communicating necessary information to the compiler (and runtime system). Information which is known (or assumed) by a human programmer is often not communicated to the compiler, and frequently the compiler is not allowed to ask questions for clarification. In some cases, the programming language provides no mechanism for communicating such information and seemingly more often makes communicating such information more difficult by the use of inappropriate default semantics (e.g., C pointer/array aliasing), discordant syntax for adding information (e.g., using __builtin_expect versus something like <code>assuming (condition) {stuff to do} else {other stuff to do}</code> or <code>case 5 probability=20%: stuff to do</code>; requiring the use of a different dialect or even language to express information adds an unnecessary barrier), and inability to use layered abstraction (e.g., providing a generic implementation of an operation with an optimized implementation that can "overload" the generic implementation with constraints [additional information] expressed so that the optimized implementation can be validated; furthermore optimized implementations could be provided as hints--"compiler, try this implementation"--or directives--"compiler, do it this way" and multiple implementations could be provided for different constraints--and the compiler might even use different implementations to generate a new implementation).</p><p>This second issue is compounded by the relatively weak support for profiling. Not only is the generation and use of profile information more complex than strictly necessary, but the overheads for generating such information seem to be greater than necessary. In addition, some information is generally not gathered by typical profiling tools. 
E.g., the temporal distribution of values for a variable and even the temporal correlation of values for different variables can be as important as the net frequency of values. Likewise, infrequent and limited use of whole program analysis prevents the compiler from using even relatively simple transformations in data structures and algorithms.</p><p>(Compiler writers not understanding computer architecture or not being informed about specific microarchitectures--not necessarily by choice--does hinder the development of better compilers.)</p><p>While providing additional information would in many cases reduce the productivity advantages of a higher level language, in many such cases, the use of "overloading" implementations might provide good performance without losing the maintenance and extension advantages of the higher level representation. When a constraint of an optimized implementation is violated or the optimized version no longer matches the generic version, these effects can be noted, allowing a decision to accept the optimization loss or rework the optimized implementation. In some cases, the reworking could be fully or partially--i.e., suggested fixes provided--automated. In some cases, multiple optimized implementations could be used to synthesize new implementations to better fit resource availability or optimization goals.</p><p>As an example, in many cases it should be possible for a compiler to recognize that the keys of a hash are immutable or restricted to a particular set of values or are accessed in a particular order, allowing a simple programming concept to be implemented in an optimized fashion. Providing directive or expectation information may allow the use of a higher level construct like a generic container while allowing the compiler to use an optimized implementation--preventing or generating a fatal error in the case of directives, dynamically recompiling or using an alternative implementation in the case of expectations.
In theory, a hash which uses only unsigned integer keys could be implemented as an array. One might even generalize the hash concept to provide a syntax like <code>container_of_records[element=variable]</code> to index the element (or a collection of elements) where the record member 'element' equals the value in 'variable', with the ability to imply 'element=' based on the type information of 'variable' or the use of a default element (perhaps indicated by a qualifier in the definition like 'indexer' or by some precedence mechanism--though such could make bugs too easy to generate and too difficult to find) or a combination (where multiple indexers are provided and type information and/or precedence is used to determine which is used).</p><p>(Expectation information can also be used to reduce compilation effort. If the expectation is correct, the only overhead is in evaluating the expectation--compared to searching through all possibilities. If the expectation is incorrect, it may be acceptable to use less aggressive optimization and provide a notification or even delay final compilation of the component dependent on the expectation until a new expectation is provided by the programmer or possibly by some automated method, potentially taking advantage of lazy evaluation of the program source code. Obviously, the communication of the expectation should be lightweight, perhaps something like <code>Some_record_type container[uint?]</code> to define a container of 'Some_record_type' values that is expected to be indexed by unsigned integer values.)</p><h2>Runtime Requirements</h2><p>Runtime requirements issues would seem to have similar solutions to the performance issues. While it would be unreasonable to expect the existence of even the limited form of artificial intelligence necessary to fully solve this issue, a more moderate degree of intelligence (or extensive searching) would seem likely to greatly reduce the problems.
In particular, it is disappointing to me that automated memory management is often exclusively implemented as generic garbage collection. I suspect that in many cases the source code provides sufficient information to determine when a resource can be freed or when reference counting would be a better choice (with or without a saturating reference count--where a saturated reference count might indicate the use of a generic garbage collection mechanism); likewise, choices of allocators (region/stack/FILO/FIFO, binary buddy, fixed-sized arrays, limited size diversity exploiting expected frequencies and best fit with limited splitting and fusing of default partitioning, object caching, etc.) could be optimized by a compiler (or perhaps even a runtime system).</p><p>In addition, a high level language should support the ability to use lower-level directives (as well as communicate expectations) without degenerating into bare assembly language. (If assembly language is used, it should be annotated with constraints--obviously at minimum the assumed ISA--and a higher level implementation should be provided, which ideally could be used to validate the assembly language implementation.) Even if the lower-level directive cannot be validated against a higher-level representation or other information in the source code (or specification)--e.g., one might use a bald assertion which is not checked at runtime and need not be validated at compile time (though notification would be provided if compile-time validation was not possible), information would be available in the source code about the assumptions made and automated tools could use such assumptions to provide assistance in debugging. In some cases, debugging aids could be selectively applied (to reduce overheads). In the case of memory deallocation, in some cases it might be possible to use virtual memory with page deduplication or compression to reduce the cost of not reusing deallocated memory by overwriting the
deallocated region with a given pattern.</p><h2>Unfixable Environments</h2><p>While the problems of bugs in the design or implementation of a runtime environment cannot be completely avoided (though using formal methods would limit such bugs to the design--and the development tools), using a modular design even in the runtime environment would seem likely to substantially reduce the impact of many design and implementation bugs/misfeatures. Allowing individual components to be swapped out would seem to greatly facilitate working around implementation and even design issues. Even layering a foreign interface over a required system interface might be practical in some circumstances. E.g., it might be possible to provide a specialized automated memory management system over a generic garbage collector with modest overheads by using a minimum number of allocations using the system memory manager. Ideally, the functionality of such module implementations could be validated according to a higher level specification.</p><p>Unfortunately, not only are the inadequacies of tools a barrier to achieving such an ideal, but human factors are likely to hinder adoption. Humans are often disinclined to spend now to save later, often have difficulty admitting mistakes, often have difficulty throwing away a work (perhaps especially if it was well done and only recently became unfit for current or expected use), and often have difficulty recognizing the validity in another perspective (especially when that perspective is poorly communicated or is bundled with aspects that are invalid [Absolute tolerance of falsehood or wrongs is not a virtue, but often saying "you really shouldn't do that" is more appropriate than excommunication. {In the context of programming language design, making something awkward to do is often more appropriate than prohibition--with bonus points if repentance and reform are made easy.} The love of truth should be accompanied by humility and mercy.]).</p>Paul A. 
Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-19550773501612922382012-01-26T23:14:00.000-05:002012-01-26T23:14:16.252-05:00A weak case against wimpy cores<p>Rereading Urs Hölzle's "Brawny cores still beat wimpy cores, most of the time" (as part of "Challenges and Opportunities for Extremely Energy-Efficient Processors"), I was again bothered by the failings of his argument.<br />
</p><p>First, he conflates performance with frequency when stating that power use scales roughly as the square of frequency. While perfect scaling (F*V<sup>2</sup> or F<sup>3</sup> where voltage can be reduced in proportion to frequency) is not possible in a given implementation and non-switching power has a significant impact, an implementation optimized for a lower frequency will generally have greater efficiency by using a shallower pipeline (with lower branch misprediction penalties and less pipeline overhead) and/or substantially less aggressive logic (e.g., performing a 64-bit addition in 30 gate delays requires noticeably less redundant operation than performing such in 15 gate delays). In addition, simply reducing the frequency will allow the same size cache to be accessed in fewer cycles, which reduces the size of the instruction window needed to cover memory access latency (for on-chip cache hits) and/or reduces the relative loss of performance from waiting on memory (given a constant latency); both of these allow greater efficiency.<br />
</p><p>In addition, frequency is not the only knob that can be turned. Brawny cores sacrifice considerable efficiency in seeking high performance. While Urs Hölzle mentions the larger area and higher frequency of brawny cores as causes of higher power, based on statements on the comp.arch newsgroup by Mitch Alsup that a half-performance core would use a sixteenth of the area, I believe Hölzle underestimates the power penalty of brawny cores.<br />
</p><p>Hölzle further weakens his case by using an example of a hundredfold increase in thread count when his thesis is that anything more than about a twofold reduction in performance from the higher end is increasingly difficult to justify. Even Sun's UltraSPARC T2 processors--which clearly target throughput at great cost in single-thread performance--had much more than 1% the performance of processors in the same manufacturing technology.<br />
</p><p>Hölzle then implies that system cost per unit performance will increase by using wimpy cores because external resources will have to be replicated. While this argument has some strength relative to microservers where the size of the processor chip is reduced, wimpy cores can be incorporated into chips of the same size as the chips using brawny cores, sharing the same resources as a smaller number of brawny cores would. Microservers have some economic advantages in using processors targeted to other workloads (so both design and manufacturing costs are shared), but the argument against wimpy cores should not be based only on this design.<br />
</p><p>Hölzle also misses the fact that a single chip could easily (and all the more in an era of "dark silicon") have a diversity of cores. (Ironically, the other presentation listed as a reference a paper--"Reconfigurable Multi-core Server Processors for Low Power Operation"--that presented such a heterogeneous design. This paper also presents one of several possible ways of using clustering to provide a range of single-thread performance with a single hardware implementation, which seems a promising area for research. [SMT is somewhat similar in allowing a single implementation to scale to a larger number of threads, though with an emphasis on single thread performance and so sacrificing more efficiency on highly threaded and low-demand workloads.])<br />
</p><p>An additional advantage of greater energy efficiency is the greater ease of more tightly integrating at least some memory in the same package as the processor (allowing increased bandwidth and/or energy efficiency). Furthermore, by reducing the number of power and ground connections, more connections can be used for communication (with memory, I/O, or other processors).<br />
</p><p>Wimpy cores may have an additional advantage in that, being simpler and smaller, they can be more quickly woken from a deep sleep state and can be kept in a less deep sleep state with a lower power cost. This would facilitate a faster transition from idle to a light or moderate workload.<br />
</p><p>There is also the consideration that more efficient wimpy cores in a heterogeneous chip multiprocessor can be used for background tasks which do not have the response time requirements of the main workload, while still allowing homogeneous systems (which might be desirable for flexible workload allocation).<br />
</p><p>There is also an implication that the required single-thread performance will continue to increase since the single-thread performance of the higher end processors continues to increase albeit more slowly than before. This may be the case, but I do not think it is a foregone conclusion.<br />
</p><p>While Amdahl's law (both in the obviously serial portion and in the excess overheads from parallel execution) limits the effectiveness of exploiting parallelism, a heterogeneous-core system would avoid much of the impact of this limit.<br />
</p><p>The software challenges in exploiting wimpy cores (even--perhaps especially--with heterogeneous CMPs) are significant, but Hölzle's argument seems particularly faulty (even if it may be less faulty than the arguments of some "wimpy-core evangelists").<br />
</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-3930617027288659562011-12-14T08:40:00.000-05:002011-12-14T08:40:03.760-05:00Enclosing marks (semantics and readability [with nesting])<p>In my writing I have a tendency to excessively use enclosing marks (parentheses, square brackets, and curly brackets mainly, but also double quotes and single quotes [really matching apostrophes]--dashes should probably also be included in the list, though a dash enclosure can terminate with another termination mark [e.g., period, closing parenthesis]).<br />
</p><p>This seems to come partially from a self-conceit issue, which presents my thoughts as being less worthy (and so deserving the de-emphasis of parentheses--don't attack me for making stupid comments) and raises doubts about the clarity of my expression (so that explanatory notes are appended in parentheses--don't attack me for being unclear and don't attack me for stating what only an imbecile would not know). Another motivation seems to be a desire to indicate that the content is not essential (whether explanatory [including adding details] or speculative [which might include adding variants])--sometimes being important content (e.g., content without which later statements would be very difficult to understand) which is "subservient" to the non-enclosed content (somewhat like nested lists in html).<br />
</p><p>While I have sometimes been able to reduce the number of parenthetical comments, it is difficult to do so because the enclosures provide a sense of structure and (I perhaps irrationally hope) allow a communication of details without excessively disrupting the flow--the reader could scan more quickly over the enclosed comment. In some cases, extended parenthetical comments can be broken off into new paragraphs (paragraph breaks do indicate a transition but do not indicate the "subservience" of the material).<br />
</p><p>While sidenotes, footnotes, and endnotes can provide some of this functionality, the composition tools and presentation tools do not seem to support ease of composition and ease of reading. This is particularly problematic in the presence of even limited media independence. In part because of the lack of excellent navigating tools, sizing html pages to match the functionality of pages in printed media (such that footnotes could be used) would be inappropriate. Even limited layout independence makes presenting notes at the bottom of a view port very difficult (and, in the case of longer notes within a small view port, the note might not fit in the same view port as the text that references it). A reasonable compromise might be the use of a separate, bordered block after each paragraph containing note marks.<br />
</p><p>Notes also have the issue that there does not appear to be a standard syntax for note marks. Although numeric marks are sometimes used to indicate references (which usually are suitable for endnotes) while asterisks, daggers, and double-daggers indicate comments, there does not appear to be a convention for indicating significance, length of the note, or nature of the note (explanatory, side comment, extension--even references can vary in nature, some being primarily crediting of ideas and some being more about extended explanation or context [and a reference might include an extended quote, which would not be suitable for inclusion in the flow of the text, followed by a reference to the source]). Traditional notes also do not provide a context for the note; a note might apply to a word or term, a phrase, a clause, a sentence, a paragraph or even a larger section of text (though an unreferenced endnote would probably be appropriate for notes relating to a whole section). For shorter contexts, highlighting the context seems reasonable (and the nature and strength of the highlighting could be used to indicate additional information about the link), but even weakly highlighting larger contexts would be distracting (an alternative might be to use a sidenote-like marking, perhaps a vertical bar, though I think that might not be helpful).<br />
</p><p>Anyway, a significant problem with extensive use of enclosing marks is that nesting urges a means to distinguish levels of nesting. While I commonly use the sequence parenthesis, square bracket, curly bracket (and, if necessary--though this usually indicates a need to reformat--back to parentheses), this has the significant problem that square brackets can be used with special semantics (e.g., to indicate a quoted mistake [sic]--which really is an inferior marking because it does not indicate the context or nature of the error nor does it guard against the error being altered in transcription).<br />
</p><p>A similar problem arises with quotation marks. Sometimes quotation marks are used to indicate approximately used or nonce terms, but that can introduce confusion for short quotations. Likewise nesting of quotations and similar enclosing marks can be difficult.<br />
</p><p>Obviously, I should reduce my use of parentheses, particularly with respect to guarding statements. Yet there remains an issue of communicating the importance, relevance/context, and nature of statements. Parentheses also provide a better visual clue of separation than commas, which may make parsing easier (particularly for short comments). In more casual writing such as a blog or forum post, there is less incentive to rework the text to maximize readability; but even in a more formal composition (especially when there is not a length limit to constrain content) there is a place for "inline notes".<br />
</p><p>Perhaps a better but impractical mechanism would be to use styling to conditionally hide content. The reader would choose the style and the user agent would hide, include, use inline or "near-line" revealable/hidable notes, and other presentation methods to present the content of interest to the reader. E.g., the desired degree of detail, the knowledge base of the reader, the interest of the reader in side comments, or even more complex factors like reputation of the writer among peers, with the reader or the reader's trusted group on the topic--such factors could be used to tailor the presentation of a text to provide a better reading experience. (A side issue with revealing and hiding notes is that such breaks the visual recognition of the reader. The layout of text seems to give powerful visual clues to readers which allow scanning a previously read text very quickly. While text search tools can remove some of the need for such, there remains some difficulty when the reader cannot remember a precise phrasing or worse in terms of automated search if the reader only remembers that there was an interesting comment in the text or a portion of the text.)<br />
</p><p>In reading academic papers, even such niceties as indicating whether the reader has read a particular reference--or a similar version, or an earlier paper on the same basic idea--and including any quick notes the reader may have attached to the work would be welcome. Likewise, bibliographies and notes on authors could be useful in filtering and directing interest in references.<br />
</p><p>It seems that there should be a better mechanism of communicating (though a large fraction of my poor communication practices come from lack of effort), but I do not know how the presentation of text (and so its composition) can be substantially improved without extraordinary effort from the writer (or the reader).<br />
</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-79019657796384344182011-12-10T23:13:00.000-05:002011-12-10T23:13:52.028-05:00Beyond keyboard mediocrity<p>It is somewhat disappointing that there is so little innovation in computer keyboard design--at least for lower-priced keyboards. In thinking about a better keyboard, I think one significant improvement would be a grid (a.k.a. matrix, rows and columns, non-staggered) layout. While it is not clear to me that such a layout would have significantly better finger movement properties--I have seen at least one claim to that effect, but that claim did not take into account the curve of the hand, only measuring physical distance--it seems that such could have some nice mnemonic features.<br />
</p><p>In particular, one could overlay a number/navigation pad onto the main key space using the equivalent of a capslock to activate. The main benefit of such would seem to be not the reduction of movement per se but the reduction of context switching. Even if one can transfer one's hand to a separate number/navigation pad without looking, a bit more attention must be given to the transition compared to pressing a mode key. (One of the problems I have with vi[m] is that the default direction keys are not "intuitive"; a better mapping using the right hand might be: index finger, left, middle finger, down, ring finger, right, row-above-middle-finger, up.)<br />
</p><p>In addition to a grid layout, it seems appropriate to have a center section with specialized keys. Besides providing a more ergonomic separation of hands during typical use, such might also be more friendly to visual examination (not only would one's hands not block line-of-sight but they might also provide a visual framing context), which may be more common for less frequently used keys, and might be more friendly for from-and-back hand movement than placing such keys to the sides (the strong fingers are near the inside, and inward arm movements <em>might</em> be slightly easier) and would seem to be better than placing such keys farther up on the keyboard (again, strong-fingers bias and possibly friendlier hand movement).<br />
</p><p>Of course, a laptop might have a problem with widening the keyboard, even with widescreen displays. A laptop would also favor a lower width-to-height aspect ratio in the keyboard; desktop keyboards tend to have a higher aspect ratio than displays have. This issue requires some additional thought.<br />
</p><p>In addition to the above, the QWERTY layout is broadly condemned as substantially sub-optimal. However, the Dvorak layout that some promote also seems a little sub-optimal to me even at a cursory examination. E.g., the 'h' and 't' keys are on the right and left of each other when a common English two-character sequence is 'th', and "rolling" the right hand clockwise seems likely to be easier. (It is understandable that this would not be a consideration in the original design since Dvorak was designed for manual typewriters, not keyboards. The placement of vowels might have a similar "rolling unfriendly" nature, though the benefit of use-frequency placement might dominate. [Interestingly, I found the following quote on a <a href="http://www.dvorak-keyboard.com/">site supporting Dvorak</a>: "When the same hand has to be used for more than one letter in a row (e.g., the common t-h), it is designed not only to use different fingers when possible (to make keying quicker and easier), but also to progress from the outer fingers to the inner fingers ("inboard stroke flow") -- it's easier to drum your fingers this way (try it on the tabletop)." Now I have to think if "drumming" is more appropriate than "rolling" even on a keyboard--and with a more comfortable separation of hands rolling might be even less natural.]) In addition, the arrangement of symbol characters seems likely to be sub-optimal for modern uses--especially for programmers.<br />
</p><p>Somewhat related to layout, a separated grid keyboard <em>might</em> be more friendly to internationalization in that it would be designed to a more common feature (the traits of the human body--variations in hand width, finger length, and other factors are issues with any layout), though a language with a higher count of commonly used glyphs might prefer more keys (that are harder to use) rather than requiring modes or digraph-like operations. Even so, it might be slightly easier to extend a grid layout without conceptual discord since a grid is so regular.<br />
</p><p>Another small issue I have with conventional keyboards is the relative lack of use of thumbs. While having the space key used by thumbs seems good, it seems it might be more appropriate to have separate keys for left and right hands (effectively required for a separated grid layout) and allow modifier keys to be used with the space key. E.g., it might be appropriate for shift-right_space to map to tab and shift-left_space to map to back tab; it might also make sense for the underscore glyph to map to a modified space--such seems to have a mnemonic appropriateness. While thumbs might only be able to use two keys where most fingers can somewhat easily reach three or more keys--and they may have to be oversized keys--, this would still seem to enhance the interface with the hands. (An alternative to keys might be a trackpad for gesture-based input. I have not learned to use a trackpad in place of a mouse, having difficulty with fine control of movement, but simple gestures such as slide up or slide down, with rapid repetition and motion speed variants as well as press and possible press-hop-press could be simple and coarse enough for common use.)<br />
</p><p>It also seems that there might be some opportunities to improve the "mnemonic sense" of the layout, though ergonomic factors should probably have priority. E.g., '!' might fit better as a shifted '.' In a similar manner, mapping the function keys to shifted--or otherwise modified--numbers might be appropriate. (Of course, this would break compatibility with having twelve function keys and with full flexibility in applying modifiers to such keys, but this might be an acceptable sacrifice for most people, especially since it could also make the function keys more accessible. This would also require finding suitable mappings for the symbol keys typically mapped to shifted numbers.)<br />
</p><p>In terms of learning a keyboard layout, as well as avoiding and recognizing mistakes, it is not clear whether "associated" or visually similar glyphs (e.g., 's' and 'z', 'm' and 'n') should be placed close together or be widely separated. Proximity would seem to more easily allow a hunt-and-peck typist to see the alternate glyph (helping to avoid typing the wrong key and perhaps guiding vision toward the correct key--seeing something that looks like the desired glyph drawing focus to the area which might then see the correct glyph) and might have some mnemonic value in learning to touch type. (The Dvorak placement of the vowels might have a mnemonic benefit as well as an alternating hand benefit.) Horizontally adjacent placement of opening and closing symbols might be more appropriate than symmetry based on left and right hands. While symmetric placement would balance hand use, the temporal separation of opening and closing would tend to make that immaterial--except in evil languages that force the programmer to use '()' (such an unbearable inconvenience! :-).<br />
</p><p>At the software level, an argument could be made that different types of key presses could have different meaning, particularly for mode setting keys. E.g., a double-press of shift might be an effective mechanism for setting capslock; it has a mnemonic advantage of using the shift key and avoids the need for a separate key for a relatively infrequently used function. Along similar lines, press-and-hold of a mode key might indicate that the mode should only be maintained while the mode key is held; press-and-release could then be used to indicate a persistent mode change. Unfortunately, this could have issues when another key is pressed while the mode key is still depressed; this would be interpreted as a single-key mode duration when the user might have intended persistence. Forcing the user to pause to account for the computer's inadequacies is inappropriate.<br />
</p><p>Many of the above thoughts are not unique to me (though the thoughts came to me before I saw that others had already discovered them). E.g., one person <a href="http://geekhack.org/showwiki.php?title=Island:6292">built a custom keyboard</a> with several of the above features, and the <a href="http://www.trulyergonomic.com/">"Truly Ergonomic Keyboard"</a> has similarities (though it uses a presumably more expensive to manufacture slanted and wavy grid).<br />
</p><p>Another point that is sometimes made with respect to keyboard design is the lack of independent switching. I am not convinced that the cost of supporting such is worth the benefit. On the other hand, with a separated grid layout, it might be practical in terms of manufacturing and usability to provide more simultaneously distinguishable key presses that are from different hands. I have not even begun to think about trade-offs in that area.<br />
</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-91939723258350055542011-12-09T13:53:00.000-05:002011-12-09T13:53:30.101-05:00A side effect of a noble wife<p>Proverbs 31:28-29 declares concerning a wife of noble character that her husband says of her "Many women do noble things, but you surpass them all." (NIV). So beyond drawing praise from her husband, such a woman also causes her husband to respect and recognize virtue in other women.</p><p>This presents wives with a significant potential ministry to the broader culture of the current generation, especially when that broader culture tends to denigrate women. A man who sees that "many women do noble things" (recognizing the presence of virtue in many women and by implication the potential for virtue in even more) is less likely to tolerate the expression of radically contrary views (viewing all women as wicked) in his social circle. A noble wife is perceived not as an exception that emphasizes the wickedness of other women but more as an archetype demonstrating the ideal nature of women.</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-13976680408724719542011-12-09T13:51:00.000-05:002011-12-09T13:51:18.137-05:00Context-based overloading<p>It seems that it might not be that involved to extend the object-oriented programming feature of function overloading based on object type to include overloading based on other contextual aspects like target ISA/microarchitecture. Digital Mars' D programming language provides a <a href="http://d-programming-language.org/version.html"><code>version</code> feature</a> which allows conditional compilation based on ISA and OS, though microarchitectural features are not included. The pre-defined version names also appear to be flat; some degree of hierarchy might be appropriate. 
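For illustration, such a hierarchy of version names might be modeled with ordinary class inheritance, selecting the most specific implementation available (a Python sketch; the tag names follow the post's examples, while the selection helper is hypothetical):

```python
# Model hierarchical compilation-target tags as classes so that specialized
# code can be looked up by the most specific matching tag, falling back to
# more general ancestors when no dedicated version exists.

class Target: pass
class Posix(Target): pass
class Linux(Posix): pass
class X86_64(Target): pass
class Nehalem(X86_64): pass

def select_impl(target_cls, impls):
    """Walk the target's inheritance chain (most specific first) and
    return the first registered implementation."""
    for cls in target_cls.__mro__:
        if cls in impls:
            return impls[cls]
    raise LookupError("no implementation for %r" % target_cls)

impls = {
    Target: "generic build",
    Posix:  "posix build",
    X86_64: "x86-64 build (SSE2 baseline)",
}
```

Here <code>Nehalem</code> picks up the x86-64 implementation and <code>Linux</code> the Posix one, unless a more specific version is later registered.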
Defining such with classes and objects could allow inheritance (e.g., <code>Linux</code> could inherit from <code>Posix</code> [but not necessarily from <code>Posix-strict</code>] and <code>Nehalem</code> could inherit from <code>x86-64</code>), possibly simplifying management of such names.</p><p>As a non-programmer who thinks about programming language design [i.e., an ignoramus but not a complete ignoramus], I think a more elegant expression of such conditional compilation might be to use any statement that evaluates to a boolean compile-time constant as a control for block compilation. The D programming language provides <code>static if</code>, but a shorthand form of <code>static if</code> could simply omit the keywords and include only the expression. Of course, one could then simply provide that syntax for all conditional blocks. It seems that this would also extend to switch-like statements, perhaps along the lines of:<br /><br />
<pre><code>some_function(function_argument) // returns a positive integer
{
    == 1
    {
        do_things_for_1
    }
    == 2
    {
        do_things_for_2
    }
    &lt; 7
    {
        do_things_for_3_through_6 // note: this assumes that fall
                                  // through must be explicit
    }
    else
    {
        do_things_for_other_cases
    }
}
</code></pre>
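Read as ordered, first-match selection with no implicit fall-through, the block above could be given semantics along these lines (a Python sketch of my reading of the proposal; the case names come from the example, while the dispatch helper is hypothetical):

```python
# Ordered, first-match dispatch: guards are tried top to bottom and only
# the first matching block runs, so "< 7" effectively handles 3 through 6
# because 1 and 2 were already claimed by earlier guards.

def dispatch(value, cases, default):
    """cases: ordered (guard, action) pairs; run the first action whose
    guard accepts value, else the default action."""
    for guard, action in cases:
        if guard(value):
            return action()
    return default()

cases = [
    (lambda v: v == 1, lambda: "do_things_for_1"),
    (lambda v: v == 2, lambda: "do_things_for_2"),
    (lambda v: v < 7,  lambda: "do_things_for_3_through_6"),
]

result = dispatch(5, cases, default=lambda: "do_things_for_other_cases")
```

Because the guards are tried in order, the <code>&lt; 7</code> case only ever sees 3 through 6 even though 1 and 2 also satisfy it.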
That might not be a good extension, however.</p><p>Using a <code>static</code> keyword does force the compiler to guarantee that the expression can be evaluated at compile time. While an advanced development environment would be able to express the compile-time-constant nature of an expression (at least for simple cases), there might be some advantage to explicitly indicating the static nature.</p><p>If the C programming language had used <code>~</code> for boolean inversion as well as bitwise complement instead of <code>!</code>, then <code>!</code> might be used as an assert-like indicator, possibly with pre-assert and post-assert forms (analogous to pre-increment and post-increment; pre-assert would be evaluated at compile time, post-assert at run time). Without an associated block, such would act like an assert; with an associated block, such would indicate a forcefulness along the lines of "if such is true--and it almost certainly is--then".</p><p>As an extension (and one that might be compatible with C programmers), one could use <code>?</code> in its pre-expression form to communicate a compile-time hint (like the above pre-assert) that the code in the associated block might be suitable if the expression evaluates to true, but the compiler can override that hint (whereas a pre-assert forces the compiler to use the code if the expression is true). One might be able to use a duplication of the symbol to express emphasis, i.e.,
<code>??</code> in prefix form might indicate a low-confidence hint and in postfix form might indicate a low-probability condition; likewise (if one could use the symbol in that way without confusing programmers) <code>!!</code> in postfix form might indicate "almost certain" (and without an associated block it might indicate a higher-level exception or more critical failure), though I am not certain what meaning such should have in prefix form (overriding later overloading--a bit like CSS <code>!important</code>--might make sense, but seems a bit dangerous).</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-54835665030821585782011-12-08T07:15:00.000-05:002011-12-08T07:15:29.982-05:00A small thought on the Trinity<p>Although I have heard expressed the recognition that the Trinitarian nature of God is an essential aspect of God's self-sufficiency in love (i.e., God must express the inter-personal character of love in God's own being), I have not heard anyone suggest that the begetting of the Son might be a necessary aspect of God as Creator. My thinking in this is that even the creative act (causing to be a non-self which expresses the character of oneself) 'must' have an analogous aspect within the Divine nature both from the necessity of self-sufficiency and from a causal aspect (one's actions reflect one's nature, so even the act of creation itself 'should' have an analogue in the Divine nature).</p><p>While this is not a great insight (even if true), it does seem to show forth a certain beauty (which hints that there is some truth in the concept). Not only does such seem to express a rich integration within the Divine nature but it also makes creation (the act and the being) a mysterious and wonderful analogue of the love of God begetting the Son.</p>Paul A. 
Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-66018893504506260262011-12-08T07:12:00.000-05:002011-12-09T13:54:43.502-05:00Computerized Tickler File<p><a href="http://andyglew.blogspot.com/">Andy Glew's blog</a> recently presented a link to an <a href="http://wiki.andy.glew.ca/wiki/Organization,_technology,_evolution_of">article about organization</a> on his personal wiki. This article brought out a few thoughts.</p><p>It seems that a computerized <a href="https://en.wikipedia.org/wiki/Tickler_file">Tickler File</a> could organize items so that the tickler alert is triggered at an appropriate time (e.g., during a lunch break).</p><p>An even more sophisticated system might use information beyond time of day (e.g., position [seated, standing, walking; workspace, conference room, other's office, hallway, outdoors, restaurant], activity [software application with focus/receiving input; input rate/type; frequency and time of last display update], psychological state [heartbeat pattern, perhaps movement and activity patterns, perhaps information about stress-inducing events and healing events]) to trigger notifications (and even time the receipt of messages) to fit one's state.</p><p>E.g., the system could exploit 'necessary' context switches caused by delayed responses to inputs as well as introduce productivity-enhancing distractions (e.g., perhaps when it appears one is spinning one's wheels working on a problem), well-timed encouragements/refreshments (e.g., a loved one might leave messages of encouragement--like lunchbox notes--which the system could pop up at an appropriate time, or a reminder of an upcoming positive event like a work group BS session or one's child's first school play), or even advisory notices (for most people advice has to be carefully presented; rather than "You're getting tired; you should consider taking a break" some might respond better to "Your favorite muffins will be 
fresh in the cafeteria in five minutes" or "Remember that you wanted to talk with Joe about such and such.").</p><p>A human manager cannot afford to be aware of fine-grained conditions (and most people's sense of privacy and self-sufficiency would be violated by such). Even a human 'executive assistant' would not be able to properly support a worker at the granularity that a computer could (and, the computer being impersonal, a worker might not have as much concern about privacy--if only the computer knows--or about dependence/competence; on the other hand, being corrected by an idiot computer can be more painful/frustrating than being corrected by a semi-competent human being).</p><p>In the article, Andy Glew also noted that the artificial size limit of ScanCards was inappropriate for a computer, but it seems to me that sizing can be a disciplining factor to push for concise presentation and might also be useful to present a visual hint of the nature of a note (size of a note and size of font can give clues about complexity, importance, etc.). It might be good to have a standard-size note and use hypertext, elision/abbreviation (which can be conditionally unelided/expanded), and font shrinking to include more information.</p><p>I am not certain how to handle hypertext display. 
Endnotes can be frustrating because even with back navigation one can lose one's place in reading; even parenthetical expansion--where one would place a note mark in parentheses and an activation would expand/make visible the note--can interfere with visual orientation because the layout of a paragraph will change (vertical layout changes could be handled with a sidebar that changes color/shade/pattern based on semantic boundaries like sentence starts--interestingly, such a sidebar mechanism might be used to mark content density [side marks have been used to mark importance and other aspects of a text]; such would allow a reader to orient by the sidebar even if insertions moved text).</p><p>It is also frustrating that a single mechanism is used to provide notes of different kinds; notes can differ not only in length but also in tightness of relevance--whether the note applies strictly to a word, sentence, or paragraph, or is tied to one but applies more generally--, in the level of knowledge expressed in the note and expected of the reader, and in the nature of the note--e.g., a historical note like "this varies from version 2.3 . . ." differs from a warning note like "this is atomic not synchronized" or an explanatory note like "this is a variant of the Smith-Jones algorithm". Sometimes something like "tool tips" might be appropriate (like many browsers handle 'title' attributes), possibly activated by a click/press-hold-and-gesture; sometimes a persistent frame--requiring explicit dismissal--might be used; sometimes an insertion into text might be appropriate. (Notes for a Shakespearean play or a poem by Alexander Pope might explain an archaic term or non-obvious allusion, a point of textual analysis, or a reminder of further information being available. Good versions tend to place the first type of note as side notes matched to the line--so they can be easily referenced without disrupting the flow of reading--and the middle kind as footnotes. 
The last kind of note might be best suited to an endnote or be included in an opening or closing commentary.)</p><p>It does seem that a dynamic and intelligent display offers significant opportunities for improvement over paper-based information storage and presentation.</p>Paul A. Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0tag:blogger.com,1999:blog-8558990029928903827.post-25111665708768461312011-12-08T07:06:00.000-05:002011-12-09T13:59:16.292-05:00A brief introduction<p>The title of this blog comes from the Peter, Paul and Mary song "Right Field"; it seems a better title than "Just a Technophile" since it is more playful (while still being sufficiently self-deprecating) and recognizes that not all the content will be computer-related (it also provides a better URL). The "and other fancy stuff" in the description comes from "Puff the Magic Dragon" (and recognizes that what I am enthusiastic about may be as odd to most people as what a little boy finds 'fancy' compared to an adult--I also thought it somewhat clever!).</p><p>The content of this blog is expected to have a heavy emphasis on computer architecture, but I expect to post a few thoughts on Christianity and other topics. I hope to encourage myself to write more freely here than I might on the <a href="http://groups.google.com/group/comp.arch">comp.arch</a> USENET group or the <a href="http://www.realworldtech.com/forums/index.cfm?action=list&roomid=2">Real World Technologies forum</a> (or elsewhere), particularly in posting raw ideas and, of course, thoughts that would be off-topic in those fora.</p><p>[Edit: I forgot to mention that "out of right field" in the description is intentional--not only referring to the song but also indicating that my comments may be even odder than those from "out of left field", perhaps even with a hint that they may sometimes be correct ("right")--"and the baseball falls into my glove".]</p>Paul A. 
Claytonhttp://www.blogger.com/profile/04389349203483308643noreply@blogger.com0