Archive for October, 2010

The Organic-Produce-Version of Star Wars

Store Wars is a film that I first ran across several years ago on the Atom Films website. This fun little Star Wars parody recently reared its head again for me on the Open Source Movies website; it is well done and worth watching if you have not yet seen it.

Speaking of Star Wars, most everyone is surely familiar with the ASCII movie version of SW Episode IV that was created some time ago by Simon Jansen, Sten Spans, and Mike Edwards.  If not, open your terminal and type:

$ telnet towel.blinkenlights.nl

Comments Were Broken But Now Fixed, I Think…

Sorry if you have left a comment on this blog only to find that it never made it through moderation.  In an effort to stop receiving automated spam, I’ve been using reCaptcha for some time now.  Apparently it has not been working properly because comments have not been coming through (thanks Brooke for bringing this to my attention); perhaps it was a conflict with my WP theme.  Comments should be able to make it into moderation now.  Thanks for your patience.

The Management

Dark Themes for OpenBox

Funny how time seems to just fly by.  Nearly a year ago I created a handful of themes for OpenBox, and I've been meaning to post them all this time.  The one called "Studio-2" is the theme that I primarily use on a daily basis, and "Sage" comes in as my second favorite.  The other themes are not ones that I use much (if ever…), but perhaps someone might like them.



Image Downloader v.2 – Download Linked Images Quickly and Easily From a Webpage

A request was made for a simplified version of the imageDownloader; this has prompted the release of imageDownloader-v.2.

Version 1:

Version 1 provided various options:

(1) Print a list of the hyperlinks contained within a web page and dump them into a text file.
(2) Download the images that a web page links to.
(3) For either of the two previous options, prompt the user to create a new directory (where the files would be saved) so that the user could better control where files were being saved when running the program numerous times before exiting.

Version 2:

Some folks, myself included, would like to bypass the majority of the choices and just download images to the current working directory.  The idea being that you navigate to where you want to go in your directory tree, create a new directory if desired, cd to that directory, and execute imageDownloader from there.  Once the images have finished downloading, the program exits and the user is returned to a command prompt.  Simple.  Easy.  Fast.
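The workflow described above might look something like this in practice (the directory is just an example; a temporary directory is used here so the sketch can run anywhere, and the script invocation is commented out since the script itself is not included):

```shell
# Example v.2 session: everything is saved to the current working directory.
dest=$(mktemp -d)          # stand-in for a directory you create yourself
cd "$dest"
# ./imageDownloader        # v.2 downloads straight into $PWD, then exits
echo "images would be saved in: $PWD"
```

In real use you would simply `mkdir` the directory you want, `cd` into it, and run imageDownloader from there.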

If you use figlet, you can uncomment lines 19 and 57 and either delete or comment out lines 20 and 58 to give you a finishing output that looks like this:


v.2 imageDownloader
v.1 imageDownloader

As stated in the previous post, you’ll want to make sure that the file is executable, and you may also wish to make this globally executable by copying an executable version of the file to your /usr/bin/ directory; this will allow you to call the program from any directory within your file tree.
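The "globally executable" step might look like the sketch below. A scratch directory stands in for /usr/bin/ so the example runs without root, and the one-line stand-in script is hypothetical, not the real imageDownloader:

```shell
# Make the script executable, then put it somewhere on your $PATH.
bindir=$(mktemp -d)        # stand-in for /usr/bin/ (no root needed here)
printf '#!/bin/sh\necho "imageDownloader ran"\n' > "$bindir/imageDownloader"
chmod +x "$bindir/imageDownloader"
PATH="$bindir:$PATH"

imageDownloader            # now callable from any directory in the file tree
```

In real life the copy step would be something like `sudo cp imageDownloader /usr/bin/` (or /usr/local/bin/, which is the conventional spot for locally installed scripts).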

Script for Downloading Images and Links From a Web Page

There are occasions when an individual might wish to download any or all of the images that may be linked from a web page, such as when there is a thumbnail image that is linked to a larger version of the same image (view an example of one such page).  Perhaps too, an individual might wish to obtain a list of all hyperlinks that are referenced in a web page.  After running across Guillermo Garron’s article where he provides some creative commands that will allow you to perform the two tasks listed above, I decided that it would be fun to write a script that executes all of this for you.  My Bash script is called “imageDownloader”, although in addition to downloading images, it will also create a text file containing all of the hyperlinks that are referenced from an html page.  Please note that the images that are downloaded are not the actual images that are displayed on the web page, but are the images that the page links to.
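The underlying technique can be sketched roughly as follows. The sample listing imitates what `lynx -dump -listonly` prints for a page (the URLs are made up), and the pipeline is my guess at the general approach, not the exact commands from Garron's article or from the script:

```shell
# Hypothetical sample of `lynx -dump -listonly <url>` output:
listing='References

   1. http://example.com/photos/cat_full.jpg
   2. http://example.com/about.html
   3. http://example.com/photos/dog_full.png'

# Pull out just the URLs (second field of each numbered line):
printf '%s\n' "$listing" | awk '/^[[:space:]]*[0-9]+\./ {print $2}' > links.txt

# Keep only the links that point at image files:
grep -Ei '\.(jpe?g|png|gif)$' links.txt > images.txt

# A real run would then hand the list to wget:
# wget -i images.txt
cat images.txt
```

With a live page you would replace the canned listing with an actual `lynx -dump -listonly "$url"` call, which is why the script asks you to have lynx installed.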

Upon executing the script, the user is welcomed with a short message that explains what the program does, and gives the user a series of choices:

This program will allow you to do one of the following:
(1) List all hyperlinks referenced in a web page and store the list in a text file
(2) Download all images that are hyperlinked from a web page,
    such as when you would click on a thumbnail image
    in order to view a larger version of the same image.

This script relies on the program called "lynx", so if you don't already have it installed,
you may want to quit (q) now and install "lynx".

What would you like to do?
Enter "1" to download a list of hyperlinks, "2" to download images that this page links to, or "q" to QUIT:
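A prompt like the one above is typically driven by a simple `case` statement; this is a generic sketch of that pattern with placeholder handlers, not the actual code from imageDownloader:

```shell
# Dispatch on the user's menu choice (handlers are placeholders).
handle_choice() {
    case "$1" in
        1) echo "listing hyperlinks..." ;;   # would run the link dump
        2) echo "downloading images..." ;;   # would fetch the linked images
        q|Q) echo "quitting" ;;
        *) echo "unrecognized choice: $1" ;;
    esac
}

# In the real script the argument would come from `read -r choice`:
handle_choice 2   # prints: downloading images...
```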

So, as requested, enter the appropriate choice that most suits your needs, and make sure that you already have “lynx” installed.  Entering either option 1 or 2 will prompt you to enter the desired URL.  It is helpful if you are using a terminal emulator that allows for copy/paste editing; my personal favorite is Terminator, which incidentally allows you to split your terminal screen into multiple panes.  You will then be asked to enter a directory name where you wish to either save your text file containing a list of hyperlinks or the location for your images that will be downloaded and saved, and then it begins working its magic.  You’ll have the option to start over or quit the program at the end.

Note: This was a fun learning opportunity for me, and although the concepts used here are not overly difficult, I enjoyed putting it together.  For those of you who are more experienced coders: if you see places where I could improve my coding practices, please feel free to send me your suggestions and upgrades for this little program.

You can download or view imageDownloader script here, or follow the process outlined below.  You might save it without the “.txt” file extension if you like, as I added this to make it viewable from the comfort of your web browser.  Remember to make the file executable before running it.

$ wget
$ mv imageDownloader.txt imageDownloader
$ chmod 777 imageDownloader
$ ./imageDownloader

When prompted to enter a URL, you might like to try using the example page that I used above for downloading images (copy/paste):

Happy downloading!


-==[ Hilltop_Yodeler ]==-

Welcome to HilltopYodeler, a place where we'll do some hollerin' about Linux, OSS/FOSS, CSS/XHTML, pickin', paddlin', tinkering, snow, rock, bicycles, and other stuff that we're freaky for. Much of what will be discussed here will be related to Ubuntu Linux, Debian Linux, Crunchbang (#!) Linux, Damn Small Linux, OpenBox, PekWM, and Gnome. Grab your coffee... pick up your piolet... tuck in your whiskey nipper... have paddle in hand... grease your boards... bend some wires... plug into your lappie, mow down some sushi... and get your fool-freak yodel on!