blog.humaneguitarist.org

discoveries in digital audio, music notation, and information encoding

three by Rafa and the greatest quote ever

I've recently finished reading three novels by Rafael Sabatini.

They happen to be the basis for three of my favorite films as well. That, of course, is why I read them.

Starting with Captain Blood – Colonel, darling – I moved on to The Sea-Hawk and Scaramouche – the same order in which the 1935, 1940, and 1952 films were respectively made but not in the order of publication.

Anyone who's read the novels and seen the films knows that the Errol Flynn iteration of The Sea Hawk has nothing to do with the book. Still, it's a great film with a great duel. Better – if memory serves – than the duel in Captain Blood, though perhaps not as fine as the duel in Scaramouche with Stewart Granger. That duel – again, if memory serves – is argued by more than one, if not many, to be the finest in cinematic history.

The justly famous eight-minute climactic duel in the theatre between Granger and Mel Ferrer (as Noel, the Marquis de Maynes) required eight weeks of training in which both stars had to memorize eighty-seven individual sword passes and perform twenty-eight stunts.

source: Scaramouche (1952) – Articles – TCM.com. Retrieved June 28, 2014, from www.tcm.com/tcmdb/title/2053/Scaramouche/articles.html

Naturally, both Flynn films are all the more majestic thanks to one Erich Wolfgang Korngold.

Not since I read Lloyd Alexander's Chronicles of Prydain has reading been so laden with a sense of adventure. And while the Prydain books have depth that I could only glimpse as a youth, Sabatini's work has truth and wisdom. This, despite the fact that the protagonists in this unofficial trilogy appear at the outset to be men who are lived by life, simply making the most of the unfortunate hands they've been dealt, or as Flynn's Blood muses, "Desperate men, we go to seek a desperate fortune."

As epic as those words are, they don't match the words apparently spoken by Flynn's Flynn …

"I intend to live the first half of my life. I don't care about the rest." – Errol Flynn.

--------------

Written by nitin

June 28th, 2014 at 12:34 pm

goodbye to the Guitar Foundation of America

My GFA membership has expired and I won't be renewing.

Despite all the "great" benefits membership entails (see below), the Soundboard has been in a steady state of decline since they stopped accepting letters to the editor long ago. Its latest incarnation – embarrassing content aside – is less a journal and more a website printed on glossy paper. As a member, I'd like to have known how the printing costs for that compare to before – maybe it's a better deal in terms of print production – but it's just another way in which I felt, more and more, like a customer rather than a real member.

This is a reminder that your Guitar Foundation of America membership has expired.  We hope you will take a moment to renew your membership for another year to ensure that you will continue to enjoy all of the great GFA benefits including Soundboard magazine, discounts at retailers such as Mel Bay Publications, Editions Doberman-Yppan, Guitar Solo Publications, Guitar Salon International, Colorado Case Company, a discount on Acoustic Guitar magazine subscription, and up to a 60% discount on instrument insurance through Clarion & Associates, Inc.

Members are involved in their organization's growth – or at least should have the opportunity to be. The Soundboard doesn't lend itself to discussion, and the website seems to have virtually none. The Soundboard Scholar was supposed to be out a year ago, yet there appears to be nothing going on – or at least nothing being communicated on the website. And whatever happened to the old forums on the old Drupal site? As a member I should be a co-owner of that discussion content. I don't really see that the organization has any digital strategy, let alone a print one (well, at least not a good one in my opinion).

It's so easy these days to do online surveys, but as a member I don't recall ever being asked my opinion on anything. Curiously though, I was asked to rewrite this post to be less "negative". I didn't. That, boys and girls, is what we call a "red flag".

Anyway …

I could go on but there's no point.

The music will live and that's what matters to me. The GFA no longer does.

a postscript: I wrote this post on April 6, 2014 and only published it today, April 29. The reason is that, after reading my words, I thought it only fair to first inquire with the GFA about the status of Soundboard Scholar. So I sent an email with the subject "Soundboard Scholar" (copied below) on April 6th. It's been over three weeks with no reply, and the webpage, as of this date, still says "Summer 2013" as the launch date.

Perhaps the email address isn't even being checked any more. So like I said before … I could go on but there's no point.

nitin arora 	 Sun, Apr 6, 2014 at 10:29 AM
To: peerreview@guitarfoundation.org

Hello,

It's been a year since this was supposed to first come out.

Is there any progress? Can something at least be posted here:

http://www.guitarfoundation.org/?page=SBScholarly

for the sake of some transparency?

thanks,

-- 
Nitin Arora
nitaro74 (at) gmail (dot) com
"Hope always, expect never."
--------------

Written by nitin

April 29th, 2014 at 5:00 pm

Posted in music, opinion

Four fur Marv: some new old music for solo guitar

This is just a short post to provide a link to some new compositions for solo guitar I completed this week – after starting in September 2012.

While I don't expect anyone but myself to take the time to learn these properly, I do know a few folks that I think would at least give them a read through.

If I had to provide some overview, it would be that they're written in a 19th-century/Romantic style and hopefully offer enough variance in texture to provide interest from piece to piece. While these are a set, each piece is certainly independent of the others, although the cadential endings of the first and last pieces share some deliberate similarities – a sort of tying up of loose ends.

I might, at some point, post a brief analysis of some of the harmonic structure of the last piece, which at times gave me quite a headache. Ultimately I let the music go where it wanted to; I simply needed to understand how it was doing so – in particular, in order to try and provide coherent spellings of accidentals/enharmonics, etc.

Below are links to the PDF of the music and the textual "liner notes" as well as a ZIP archive with the PDF, liner notes, and source LilyPond files, etc.

Oh, and by the way, the work is dedicated to a friend of mine. He happens to be a cat. If kitty-inspiration is seemingly good enough for Domenico Scarlatti, it's surely good enough for me.

PDF: http://humaneguitarist.org/four_fur_marv.pdf

ZIP: http://humaneguitarist.org/four_fur_marv.zip

--------------

Written by nitin

March 29th, 2014 at 11:20 am

Posted in music, news

trying to do a better job of image security

Just a quick post.

I've been thinking about image security lately, within the context of reading ebooks online.

Nothing online's going to be totally safe, but I have been thinking of better things I can do to protect an image on a website.

I've seen some sites that use image servers to protect access to the direct image, but if the image is called via the <img> tag, then all one has to do is use their browser's "Save web page complete" function.

I haven't investigated why, but an image called via CSS's background-image property doesn't appear to get saved by that same "Save web page complete" function.
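For illustration, the idea looks something like this (hypothetical markup; the image proxy script from below stands in for the image URL):

<!-- the image is referenced only from CSS, so "Save web page complete"
     doesn't appear to capture it the way it captures <img> sources. -->
<div style="width: 400px; height: 600px;
            background-image: url('image.php?q=page_001.jpg');
            background-size: contain;
            background-repeat: no-repeat;">
</div>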

I found a nice tutorial on "shrink wrapping" an image at http://skinnyartist.com/how-to-shrink-wrap-your-images/.

On top of that I used an image proxy script to read and return the data for a given referring URL only and used htaccess to block ALL HTTP requests to images within a given folder.

Here's the link to the "demo": http://blog.humaneguitarist.org/uploads/simijh/index.php. If you can download the image by hook or crook, I'd appreciate a comment below on how it was done. So far, using just Firefox, I can go to "View Page Info>Media>Save As" and get it although that's hopefully a bit of a pain and, therefore, a deterrent.

The PHP image proxy script and the .htaccess file are below.

image.php

<?php

function return_image($image_url, $referring_url, $url_prefix="", $fallback_image="") {
  /* Takes an image located at ($url_prefix + $image_url) and returns the image data provided the
     HTTP_REFERER is equal to $referring_url.

    If the image does not exist it will fall back to the $fallback_image.

    For the basic code related to proxying data in this way, see: "http://www.php.net/manual/en/function.fpassthru.php".
  */

  // restrict access to the image to $referring_url only (HTTP_REFERER may not be set at all).
  if (!isset($_SERVER["HTTP_REFERER"]) || $_SERVER["HTTP_REFERER"] != $referring_url) {
    echo "You aren't allowed to see this image directly.";
    exit;
  }

  $image_url = $url_prefix . $image_url;
  $binary = null;

  // open the file only for .jpg, .gif, and .png files.
  // note: stripos() returns the match position or false, so test against false explicitly;
  // a loose "== True" comparison would misbehave for a match at position 0.
  if (stripos($image_url, ".jpg") !== false
    || stripos($image_url, ".gif") !== false
    || stripos($image_url, ".png") !== false) {
    $binary = @fopen($image_url, "rb");
  }

  // use the fallback image if opening the file failed.
  if (!$binary) {
    $image_url = $fallback_image;
    $binary = fopen($image_url, "rb");
  }

  // set the MIME type; send the image; stop the script.
  $extension = strtolower(substr($image_url, -3)); //fine here, since only 3-character extensions are allowed above.
  if ($extension == "jpg") {
    $extension = "jpeg"; //the registered MIME type is "image/jpeg", not "image/jpg".
  }
  header("Content-Type: image/$extension");
  fpassthru($binary);
  exit;
}

// execute return_image().
if (isset($_GET["q"])) {
  return_image($_GET["q"], "http://blog.humaneguitarist.org/uploads/simijh/index.php", "", "");
}

?>

.htaccess

<FilesMatch "\.(?:jpg|gif|png)$">
Order allow,deny
Deny from all
</FilesMatch>
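
Note that the Order/Deny directives are Apache 2.2 syntax; on Apache 2.4 the equivalent block would presumably use the newer authorization directive:

<FilesMatch "\.(?:jpg|gif|png)$">
Require all denied
</FilesMatch>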

    Written by nitin

    February 8th, 2014 at 10:37 am

    metadata is like the Muppets and Italian film

    A couple of years ago, I was – against the better judgment of a friend – asked to give a guest class lecture at NC Central University on OAI, metadata harvesting, etc. for her metadata class at their school of library and information sciences.

    Apparently, it went badly enough that the next year's instructor asked me to try and get it right.

    I'm not sure if I did get it right after all, but I wanted to post the presentation slides all the same. They're here.

    It's a little silly – using a Muppet motif throughout – but I think my main points got across. I think people are much more likely to get something if it's looked at in real-world terms (as if the Muppets are real). A little sense of humor doesn't hurt either.

    I'd like to say that the "splash" page, a movie poster for Michelangelo Antonioni's film Blow Up, is the best part. It's all downhill from there, no doubt.

    To me, the film made no "sense" until the last, surreal scene of people playing tennis without a ball put it all together for me. It was as if the entire film was a Subject and a string of Adverbs until the last scene, the Verb.

    I let the class know this … and asked them to remember that while putting up with my presentation.

    [image: Blow Up movie poster]

    --------------

    Written by nitin

    January 11th, 2014 at 11:20 am

    Monster Serials: listening to old radio plays while working on music notation

    There are a lot of things I can do while having music, mostly classical, on in the background.

    Working on music notation is not one of them. I don't mean composing the notes themselves – for that I need silence – but rather entering notes and performance indications into notation software.

    So instead I like to listen to old time radio plays. Lately, I've had one of the following two playing in the background while I work:

    Both are hosted on Archive.org.

    Interestingly, the actor who portrays the monster in the Frankenstein piece initially mispronounces "Frankenstein" as "Franken-steen." I had thought this was a subtle way of showing the monster getting smarter, since he later learns to pronounce it correctly, but I believe some of the other actors made the same error here and there. So it's probably just an artifact of reading a script and recording with little time to check for continuity errors.

    I'll also mention that in trying to come up with a clever title (a play on Monster Cereals – get it?) I learned about Franken Berry Stool.

    That is all.

    --------------

    Written by nitin

    January 4th, 2014 at 1:06 pm

    humaneguitarist.org enters the 21st century and some background information

    I started my site (humaneguitarist.org) around 1996 after I thought it would be a good idea to share what I learned with regard to classical guitar posture.

    After some major surgery in 1995, I tried to resume my studies with Christopher Berg at the University of South Carolina but I was simply too weak to do it. The telling moment came when I was in a practice room and dropped some sheet music. When I bent my knees to get down to pick the papers up, I was barely able to get back up.

    On top of that, my back started to hurt like hell from practicing. I was using a footstool at the time because I found the A-Frame guitar support – widely used within our guitar program back then – to be far too unstable for my tastes. A footstool plus a body not up to the task was the perfect recipe for a bad back. I'd wake up each morning feeling somewhat OK, but after a couple hours of practicing the pain became too great to continue.

    Being too young to drink, I hit the books instead of the bottle.

    And that's when the research that gave birth to this website began.

    After some 17 years, I finally updated the HTML code … I don't even think CSS was around when I first created the site.

    Now, of course, it still looks the same, but the code is much cleaner and easier to read. Mostly, it's just no longer annoying to me personally to know that it's messy under the hood.

    Anyway, that's my story and I'm sticking to it.

    Happy New Year.

    Update, later in the day: Actually, thinking back, the "humaneguitarist.org" domain probably only goes back to 1999. The main content has been online since about 1996, via America Online, GeoCities, tripod.com, and whatever else, until I got sick of changing URLs and finally started using my own domain name.

    --------------

    Written by nitin

    December 31st, 2013 at 12:40 pm

    Posted in music, news

    discourteous accidentals

    Earlier today I was proofreading the four pieces I'm writing to make sure I place courtesy accidentals where appropriate – as in where I deem them appropriate.

    As the name implies, courtesy accidentals are just, well, a courtesy to the performer and technically aren't necessary.

    In more chromatic sections, I think their importance rises. But I also think one has to combat instances where a musician might legitimately not know what to play – perhaps because the altered note is more ornamental in nature – or where that person might not know enough to make an informed decision when there's a slight hint of ambiguity.

    In the excerpt below I was about to place a courtesy natural sign on the "g" in the bass voice of measure 3 as the "g" in measure 2 was a "g-sharp".

    But then I realized I had already stated it was "g-natural" by specifying it would be played as an open string – considering that I didn't specify an alternate tuning. There are actually a few instances throughout the pieces where I think the fingering or implied fret position makes it clear what to play without the need for courtesy accidentals.

    Of course, one could argue that I should place a courtesy accidental on the "f-natural" in measure 2 given that it was an "f-sharp" in measure 1 – albeit in a different voice.
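
    If I were going to add one, LilyPond at least makes it trivial. A minimal, hypothetical sketch – not the actual passage – where a "!" after a pitch forces the accidental to print (a "?" would print it in parentheses instead):

      \relative c'' {
        fis4 g a2 |  % measure 1: an f-sharp.
        f!4 e d2 |   % measure 2: the "!" forces a printed courtesy natural.
      }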

    But the guitarists in the audience already knew the correct note to play … right?

    [image: discourteous accidentals, musical excerpt]

      Written by nitin

      December 28th, 2013 at 1:23 pm

      LilyPond: explicit initial durations for a second voice and thanks to Frescobaldi

      Happy Holidays, humbug, and stuff.

      I'm in a coffee shop working on finalizing pitches and durations for some pieces I'm writing for solo guitar. I'm hoping I can have the fingerings and dynamics all punched in by the end of the year, as I've got a nice amount of vacation built into the school year (one of the best advantages of working in academia). I started composing these in September 2012, so I'm behind schedule: I'd wanted to be all done by September 2013. Good intentions and all.

      Anyway, I'm using the mighty Frescobaldi editor to make LilyPond scores, after doing the initial notation in MuseScore and exporting it to LilyPond format. And yesterday I was struggling with something I'd never come across before.

      Basically I had thought that the first note defaults to a quarter note if no duration is specified. According to the LilyPond documentation here, that is indeed the case:

      If the duration is omitted, it is set to the previously entered duration. The default for the first note is a quarter note.

      The last measure of my first voice part ended with a half-note rest and the first note of my second voice part started with a quarter note, which I failed to explicitly notate because I thought a quarter note was the default per voice. My bad.

      Well, it seems the first note of the second voice was actually inheriting the half note from the end of the first voice. After getting a bit frustrated as to why my score was compiling so wonkily, I realized I needed to set the duration of the first note of the second voice explicitly, as below.

        <g e' c'>4 r <b f' g> r |
        <g c e>2\fermata r\bar "|." |
      }
      
      % voice 2.
      basso = \relative c {
        \voiceTwo
        \stemDown
        \set fingeringOrientations = #'(left)
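        % the explicit "4" on the first note below is the fix: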
      
        c4 e r e | % 1
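
      To make the gotcha easy to reproduce, here's a self-contained toy example of my own (the \version is a guess – any recent 2.x should behave the same way): the running "previous duration" follows the order of the input file, so without the explicit "4" the first note of basso would come out as a half note, inherited from the end of soprano.

        \version "2.16.2"  % assumed version, not necessarily what you have installed.

        soprano = \relative c'' { c4 d e f | g2 r }  % this voice ends in half-note durations...
        basso = \relative c { c4 e g e | c1 }        % ...so this first note needs its own "4".

        \score {
          \new Staff <<
            \new Voice { \voiceOne \soprano }
            \new Voice { \voiceTwo \basso }
          >>
        }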
      --------------

      Written by nitin

      December 24th, 2013 at 10:55 am

      search and auto-complete suggestions with a little Solr and lots of SQLite

      Over the last few days, I've been working on how to throw back search and auto-complete suggestions against a Solr index for an eBook project at work.

      The idea is simple: a user starts typing and matches appear below the search box as the user types. The matches would "suggest" the best matching "title" and "author" values against the user query.

      Conceptually, it's simple and quite easy to do with an SQL database. Problem is, we've – for now at least – decided to use Solr because it offers a far better fulltext search experience than anything else we currently have experience with. We're in a bit of a time crunch, so exploring other options isn't much of, well, an option. For now.

      Anyway, to do this I was just going to create a JSON API so our web developer could send the user's typed text to the API and get back the "best" matches. She's using typeahead.js in the mockup we have and it seems to be working well against a very small test index.

      I should say that typeahead.js seems to support pre-loading a JSON file and doing a good job of showing suggestions against that file. The problem is that I don't see that scaling to potentially thousands of items, nor will that allow the most relevant matches to appear at the top. Pre-loading the data seems to simply follow the order of the items in the JSON file and it's impossible to know what the "best" order is for one user at any one time, let alone all users.

      So, the API has to decide what to return and how to order it.

      Anyway, I wasn't getting far with Solr's own "suggest" method – it really didn't seem to be what we wanted.

      I also saw some tutorials that, honestly, I didn't have the time to explore and test with. The search-suggestion/auto-complete thing is, in the end, low priority. Very much unlike our deadline.

      One tutorial here that was easy to work with and implement was based on using facets to return what I call those "perfect" matches. These are matches that are the (user string + a post-fixed wildcard). In other words, if the user types "the cat" a perfect match would be "the cat in the hat (title)", etc. At the end of the tutorial, note the warning about "the load caused by the use of faceting mechanism."
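
      As a rough sketch of that approach (the host, core, and field names are made up), the request looks something like this, with facet.prefix doing the "starts with" matching:

        http://localhost:8983/solr/ebooks/select?q=*:*&rows=0&facet=true&facet.field=title&facet.prefix=the+cat&wt=json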

      Anyway, the perfect matches alone weren't really doing it for us; we also wanted relevant matches, so that if one typed "cat in the hat" the API would likely return "the cat in the hat (title)" because, while not a perfect match, it should have a high relevance score.

      Well, in moments of crisis I like to remind myself of a little thing called SQLite and in-memory databases.

      So, here's what appears to be working. And it should scale, limited only by the speed of Solr and not directly by index size.

      What I did was simply set the API to query against the "title" and "author" fields in Solr (with post-fixed wildcards).

      I'm using the default of 10 rows for a search, so the maximum matches returned is 20, aka 10 * (# of fields). We can always bump that up if we think it's needed.

      So after Solr returns the results, we have 20 total field values.

      Those are then fed into an in-memory SQLite database.

      So if the user typed "the cat in th", the query would be a union of a post-fixed wildcard search and a fulltext search (with SQLite's built-in support for Porter stemming!) against the 20 values. For the fulltext search, each word in the user query is separated by " OR ".

      Since we need the results ranked by relevance, I hard-code the "perfect" matches to have a relevance rank of "99999" and then use the length of the "offsets()" function in SQLite to get the relevance for the remaining values.

      After SQLite is done, the query results are simply placed into an array, with the field value added parenthetically – i.e. "the cat in the hat (title)". Then the array is JSON encoded and shipped out the door. By the way, I'm also making all the values lowercase for reading ease and to normalize things like "JANE DOE" and "Jane Doe".
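
      For what it's worth, a response from the API might look something like this (made-up values):

        {"suggestions": ["the cat in the hat (title)", "the cat in the hat comes back (title)", "dr. seuss (author)"]}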

      p.s. just because 20 values come back from Solr, that doesn't mean SQLite will return 20 results: many of those values are, of course, neither perfect matches nor hits against the fulltext search. The remaining values could be returned as well, but so far I don't think we want that, as some of them won't make sense to the user. That's to say, if one types "the cat in the hat", the value "dr. seuss (author)" could come back if we wanted it to; it would simply be among the rest of the values obtained by adding another "UNION" clause – i.e. a query that returns everything in the database. But the question is not "how?" but rather "why?".

      Anyway, when/if a user clicks on a suggestion, the suggestion will be fed to a "search" method in the API that will sniff the field value out of the parentheses and return results by using an " AND "-separated search against that field in the index.
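
      The sniffing itself should be simple enough – something like this hypothetical fragment (the regex and variable names are mine, not the actual API code):

      <?php
      // hypothetical sketch: pull the field name back out of a clicked suggestion
      // like "the cat in the hat (title)".
      $suggestion = isset($_GET["q"]) ? $_GET["q"] : "the cat in the hat (title)";
      if (preg_match('/^(.+) \((author|title)\)$/', $suggestion, $matches)) {
        $value = $matches[1]; // "the cat in the hat"
        $field = $matches[2]; // "title"
        $params = "&q=$field:" . str_replace(" ", "+AND+", $value) . "&fl=author,title";
        // ... hand $params off to query_solr() as in the suggestions code below.
      }
      ?>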

      My only outstanding question at this point is whether I should keep doing what I do now – insert each value into the SQLite database one at a time – or build one big INSERT statement and load all the data in one fell swoop.
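
      If the one-at-a-time inserts ever become a bottleneck, a low-effort middle ground might be a single prepared statement reused inside one transaction. A sketch against the same $memory_db, $docs, and $fields used in the code below:

      // hypothetical sketch: one prepared INSERT, reused inside a single transaction.
      $stmt = $memory_db->prepare("INSERT INTO box (suggestion, suggestion_type) VALUES (:s, :t)");
      $memory_db->exec("BEGIN");
      foreach ($docs as $doc) {
        foreach ($fields as $field) {
          $stmt->bindValue(":s", strtolower($doc[$field]), SQLITE3_TEXT);
          $stmt->bindValue(":t", $field, SQLITE3_TEXT);
          $stmt->execute();
          $stmt->reset();
        }
      }
      $memory_db->exec("COMMIT");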

      Anyway, the code is below. Ultimately the meat of the logic here is Solr-agnostic, since any data store with key/value pairs could be made to work with this.

      Which is good because, as I initially stated, we're only using Solr out of time constraints not necessarily because we think it's the best way to go.

      <?php
      
      function return_suggestions($term, $min_len=2) {
        /* Returns JSON auto-complete suggestions against Solr index of "author" and "title" fields for $term.
           Suggestions are lowercase and contain the field name within parenthesis. */
       
        // force $term to be at least equal to $min_len.
        if (strlen($term) < $min_len) {
          exit;
        }
       
        // !!! these will eventually be handled by a common library (query_tools.php).
        $term = trim($term);
        $term = str_replace("%20", " ", $term);
       
        // query Solr.
        include_once("includes/solr_tools.php");
        $params = "&q=author:$term*+OR+title:$term*&fl=author,title";
        $response = query_solr($params, "select", "+OR+");
       
        // get "docs" field from Solr.
        $docs = $response["response"]["docs"];
        //print_r($docs); //test line.
        if (!$docs) {
          exit;
        }
       
        // create SQLite database.
        $memory_db = null;
        $memory_db = new SQLite3(":memory:");
       
        // create table; you must use "VIRTUAL TABLE" for fulltext (FTS3/4); see: http://www.sqlite.org/fts3.html#section_1_2.
        $memory_db->exec("CREATE VIRTUAL TABLE box USING FTS4 (id INTEGER PRIMARY KEY AUTOINCREMENT, suggestion, suggestion_type, tokenize=porter)");
       
        // get the "suggestion" and the field it came from; insert values into database.
        $fields = (array_keys($docs[0])); //the fields are "author" and "title" because those are the ones requested per $params/URL query.
       
        foreach ($docs as $doc) {
          foreach ($fields as $field) {
            $suggestion_value =  strtolower($doc[$field]);
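            // note: interpolating values straight into the SQL trusts whatever Solr returns;
            // a prepared statement (SQLite3::prepare + bindValue) would be the safer route.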
            $insert = "INSERT INTO box (suggestion, suggestion_type) VALUES (\"$suggestion_value\", \"$field\")";
            //echo $insert . "\n"; //test line.
            $stmt = $memory_db->exec($insert);
          }
        }
       
        // prepare for query; make database query; run query.
        $term_OR_separated = str_replace(" ", " OR ", $term); //for full text, need to separate words by "OR".
       
        $query = "SELECT suggestion, suggestion_type, 99999 as rank FROM box WHERE suggestion LIKE '$term%'" //"perfect" matches.
        . " UNION SELECT suggestion, suggestion_type, length(offsets(box)) as rank FROM box WHERE suggestion MATCH '$term_OR_separated'" //relevant matches.
        . " ORDER BY rank DESC";
        //echo $query; //test line.
       
        $results = $memory_db->query($query);
       
        // append matches to $suggestions.
        $suggestions = array();
       
        while ($row = $results->fetchArray()) {
          //print_r($row); //test line.
          $suggestion = $row["suggestion"] . " (" . $row["suggestion_type"] . ")"; //i.e. "jane doe (author)" instead of "jane doe".
          if (!in_array($suggestion , $suggestions)) {
            array_push($suggestions, $suggestion);
          }
        }
       
        $memory_db->close();
      
        // exit if $suggestions is empty.
        if (count($suggestions) < 1) {
          exit;
        }
       
        // write and output JSON.
        include_once("includes/make_json.php");
        $output = array("suggestions"=>$suggestions);
        echo make_json($output);
      }
      
      // execute return_suggestions().
      if (isset($_GET["q"])) {
        return_suggestions($_GET["q"]);
      }
      
      ?>

      Update, later in the day: So, I thought more about augmenting the query – not to return all the remaining results, but just the ones where the last word (or word fragment) entered by the user, surrounded by wildcard characters, yields something. Here's the part of the code I changed compared to that above (starting at line 47).

        // prepare for query; make database query; run query.
        $term_OR_separated = str_replace(" ", " OR ", $term); //for full text, need to separate words by "OR".
        $term_last = explode(" ", $term); //getting the last word in the query; see: http://stackoverflow.com/a/11029470.
        $term_last = $term_last[count($term_last)-1];
        //echo $term_last; //test line.
       
        $query = "SELECT suggestion, suggestion_type, 99999 as rank FROM box WHERE suggestion LIKE '$term%'" //"perfect" matches.
        . " UNION SELECT suggestion, suggestion_type, length(offsets(box)) as rank FROM box WHERE suggestion MATCH '$term_OR_separated'" //relevant matches.
        . " UNION SELECT suggestion, suggestion_type, 0 as rank FROM box WHERE suggestion LIKE '%$term_last%'" //wildcard against last word only.
        . " ORDER BY rank DESC";
        //echo $query; //test line.
       
        $results = $memory_db->query($query);
      --------------

      Written by nitin

      December 7th, 2013 at 11:08 am