Legendary guitar effects: The Maestro Fuzz-Tone FZ-1, FZ-1A and FZ-1B
Posted 2007-07-21 in Effects by Johann.
Dirty, distorted guitar sounds pretty much started in 1962 with the Maestro Fuzz-Tone.
Maestro Fuzz-Tone FZ-1
Used to record the Rolling Stones’ single “(I Can’t Get No) Satisfaction”, the FZ-1 was the first distortion effect (“Fuzztone”) on a hit record.
Ironically, it didn’t sell at first:
The marketing and sales people were optimistic about the prospects of selling a lot of Fuzz-Tone effects and produced over 5000 that first year. Gibson's dealers bought 5458 pedals during 1962, confirming Gibson's sales forecast. Unfortunately, the buying public didn't buy all those pedals from the dealers as expected.
- Maestro Fuzz Tone FZ-1 sound clip (MP3, 2.4 MB)
- Maestro Fuzz Tone FZ-1 sound clip (Ogg Vorbis, 1.6 MB)
Maestro Fuzz-Tone FZ-1A
The FZ-1A circuit is pretty much the FZ-1 with some minor changes. For example, it runs on one 1.5 V battery instead of the two the FZ-1 uses.
I think the FZ-1A sounds more like “Satisfaction”, though.
- Maestro Fuzz Tone FZ-1A sound clip (MP3, 3.2 MB)
- Maestro Fuzz Tone FZ-1A sound clip (Ogg Vorbis, 2.1 MB)
Maestro Fuzz-Tone FZ-1B
Unlike the other Fuzz-Tones, which are three-transistor circuits, the FZ-1B is a two-transistor circuit running at 9 V. From what I know, it was designed by Bob Moog. It also doesn’t use germanium transistors like the FZ-1 and the FZ-1A do.
The sound is closer to other two-transistor fuzzes, like the Fuzz Face.
- Maestro Fuzz Tone FZ-1B sound clip (MP3, 1.8 MB)
- Maestro Fuzz Tone FZ-1B sound clip (Ogg Vorbis, 1.1 MB)
An introduction to blocking spambots and bad bots
Posted 2007-07-19 in Spam by Johann.
This post explains how to prevent robots from accessing your website.
Why certain robots should be excluded
Not all software on the net is used for activities beneficial to you. Bots are used to
- harvest email addresses (which are then sold and spammed),
- scrape websites (stealing content),
- scan for unlicensed content (MP3 files, movies),
- spam your blog.
Good bots
`robots.txt` is a file that specifies how robots may interact with your website. `robots.txt` is always placed at the root folder of a domain (https://johannburkard.de/robots.txt). If a bot respects `robots.txt`, there really is no need to take any of the measures described below.

Bots from the large search engines respect `robots.txt`. Some bots used for research also do, such as IRL Bot.

To prevent a bot that respects `robots.txt` from crawling a certain area of your website, add an entry to your `robots.txt` file:

```
User-agent: <bot name>
Disallow: /private/stuff
```

Don’t forget to validate your `robots.txt`.
Bad bots
Bots that don’t respect `robots.txt` are bad bots. There are different strategies for blocking them.
By user agent
In many cases, bad bots can be identified by their user agent – a unique identification string sent along with requests. To block a bot based on its user agent, configure your web server accordingly (`.htaccess` for Apache, `lighttpd.conf` for lighttpd).

In this example, I configure my web server to block all requests made by user agents containing `PleaseCrawl/1.0 (+http://www.pleasefeed.com; FreeBSD)` or `Fetch API`:

```
$HTTP["useragent"] =~ "(…|PleaseCrawl|Fetch API|…)" {
    url.access-deny = ( "" )
}
```
Examples where this strategy can be effective:

- Known bad bots.
- HTTP programming libraries such as `lwp-trivial` or `Snoopy`.
- Spam software that contains small hints in the user agent string, such as `Bork-edition`.
By IP address
Companies that perform stealth crawling mostly operate from their own netblocks. Spammers and scrapers might also have their own IP addresses. You can block these addresses on your firewall or in your web server configuration.
Example:
```
# Symantec
$HTTP["remoteip"] == "65.88.178.0/24" {
    url.access-deny = ( "" )
}
```
Examples where this strategy can be effective:
- “Brand monitoring” and “rights management” companies such as Cyveillance, NameProtect, BayTSP.
- Spam-friendly hosters, such as Layered Technologies.
By behavior
Blocking bots by their behavior is probably the most effective but also the most complicated strategy. I admit I don’t know of any software for behavior-based blocking – the Bad Behavior software only works on a per-request basis.

The idea is to analyze multiple requests from one IP address. To be effective, this analysis must happen in real time or near real time. A small sketch follows the list below.
Some factors that could be analyzed:
- Entry point into the site.
- Referring page. Accessing a page deep in a site without a referrer is somewhat suspicious – however, some browsers can be configured not to send one.
- Which referer is passed on to scripts and images.
- Loading of images, CSS or scripts. Bots rarely request the same files real browsers do.
- Time intervals between requests. Loading multiple pages in very short (or very long) intervals is usually a sign of bots.
- Variation in request intervals. Human visitors rarely request exactly one page every 10 seconds.
- Incorrect URLs or invalid requests.
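To make this more concrete, here is a minimal sketch in Java – my own illustration, not an existing tool. It keeps a sliding window of request timestamps per IP address and flags clients whose request intervals are implausibly short or metronome-regular. The class name and thresholds are assumptions to be tuned against real logs:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: flags an IP address as bot-like when its
 * request intervals are very short or suspiciously uniform.
 * The thresholds are made up - tune them against your own logs.
 */
public class IntervalAnalyzer {

    private static final int WINDOW = 10;              // timestamps to keep per IP
    private static final long MIN_AVG_MILLIS = 500;    // faster than this looks automated
    private static final long MAX_JITTER_MILLIS = 50;  // intervals this uniform look scripted

    private final Map<String, Deque<Long>> requests = new ConcurrentHashMap<>();

    /** Records a request and returns true if the IP now looks like a bot. */
    public boolean record(String ip, long timestampMillis) {
        Deque<Long> times = requests.computeIfAbsent(ip, k -> new ArrayDeque<>());
        synchronized (times) {
            times.addLast(timestampMillis);
            if (times.size() > WINDOW) {
                times.removeFirst();
            }
            if (times.size() < WINDOW) {
                return false; // not enough data yet
            }
            long min = Long.MAX_VALUE, max = Long.MIN_VALUE, sum = 0, prev = -1;
            for (long t : times) {
                if (prev != -1) {
                    long interval = t - prev;
                    min = Math.min(min, interval);
                    max = Math.max(max, interval);
                    sum += interval;
                }
                prev = t;
            }
            long avg = sum / (times.size() - 1);
            // Very fast clients and metronome-like clients are both suspicious.
            return avg < MIN_AVG_MILLIS || (max - min) < MAX_JITTER_MILLIS;
        }
    }
}
```

A real implementation would combine several of the factors above and expire idle IP addresses to keep memory bounded.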
MAC address lookup using Java
Posted 2007-07-09 in Java by Johann.
Code to extract the MAC address of the network card has been in UUID for a long time now. Because some people download UUID for just the MAC address part, here is a short explanation of how that code works and how you can use it in your projects.
First, download UUID and open `com.eaio.uuid.UUIDGen`. The MAC address extraction is performed in the static initializer.
The code does a little bit of OS sniffing to find a suitable program which is then used to print the MAC address. The output of this program is read in and parsed.
Operating systems
HP-UX

On HP-UX, `/usr/sbin/lanscan` is started.

Linux/Mac OS X/possibly other Unices

The `/sbin/ifconfig` command is used.

Windows

`ipconfig /all` is used.

Solaris

On Solaris, the first line of the host name list is used to call `/usr/sbin/arp`. The code is somewhat equivalent to /usr/sbin/arp `uname -n | head -1`.
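The following is a simplified sketch of that approach – my own condensed illustration, not the actual UUIDGen code (the real MACAddressParser handles more output formats):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Simplified sketch of the idea behind UUIDGen's static initializer:
 * pick a program based on the operating system, run it and scan the
 * output for something that looks like a MAC address.
 */
public class MacSniffer {

    // Matches colon- or dash-separated notation, e.g. 00:0d:93:5e:20:10.
    private static final Pattern MAC =
            Pattern.compile("((?:\\p{XDigit}{2}[:-]){5}\\p{XDigit}{2})");

    public static String findMacAddress() throws Exception {
        String os = System.getProperty("os.name").toLowerCase();
        String[] command = os.startsWith("windows")
                ? new String[] { "ipconfig", "/all" }   // dashes on Windows
                : new String[] { "/sbin/ifconfig" };    // colons on most Unices
        Process p = Runtime.getRuntime().exec(command);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = MAC.matcher(line);
                if (m.find()) {
                    return m.group(1); // first MAC-like string wins
                }
            }
        } finally {
            in.close();
        }
        return null;
    }
}
```

Note that this naive version simply returns the first match; the real code is more careful about which interface it picks.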
Ripping the MAC code
- Download the latest version of the UUID software and extract the archive.
- Copy the following source files from the `uuid-x.x.x/src/java` folder into your project: `com/eaio/util/lang/Hex.java`, `com/eaio/uuid/MACAddressParser.java` and `com/eaio/uuid/UUIDGen.java`.
- Copy the static initializer up to and including the following snippet into a new class:

```java
if (macAddress != null) {
    if (macAddress.indexOf(':') != -1) {
        clockSeqAndNode |= Hex.parseLong(macAddress);
    }
    else if (macAddress.startsWith("0x")) {
        clockSeqAndNode |= Hex.parseLong(macAddress.substring(2));
    }
}
```

- Retrieve the MAC address from the `clockSeqAndNode` variable (a small formatting helper follows below).
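If you need the address as a printable string again, here is a hypothetical helper – plain JDK, not part of UUID – that assumes the MAC address sits in the lower 48 bits of `clockSeqAndNode`, which is where the snippet above puts it:

```java
// Hypothetical helper, not part of UUID: formats the lower 48 bits
// of clockSeqAndNode as the usual colon-separated MAC notation.
public static String toMacString(long clockSeqAndNode) {
    long mac = clockSeqAndNode & 0xFFFFFFFFFFFFL; // node = lower 48 bits
    StringBuilder out = new StringBuilder(17);
    for (int i = 5; i >= 0; i--) {
        out.append(String.format("%02x", (mac >> (i * 8)) & 0xff));
        if (i > 0) {
            out.append(':');
        }
    }
    return out.toString(); // e.g. "00:0d:93:5e:20:10"
}
```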
That’s all.
The top 10 spam bot user agents you MUST block. NOW.
Posted 2007-06-25 in Spam by Johann.
Spambots and badly behaving bots seem to be all the rage this year. While it will be very hard to block all of them, you can do a lot to keep most comment spammers away from your blog and most scrapers from harvesting your site.
In this entry, I assume you know how to read regular expressions. Note that I randomly mix comment spambots, scrapers and email harvesters.
User agent strings to block
""
. That’s right. An empty user agent. If someone can’t be arsed to set a user-agent, why should you serve him anything?^Java
. Not necessarily anything containingJava
but user agents starting withJava
.^Jakarta
. Don’t ask. Just block.User-Agent
. User agents containingUser-Agent
are most likely spambots operating from Layered Technologies’s network – which you should block as well.compatible ;
. Note the extra space. Email harvester."Mozilla"
. Only the stringMozilla
of course.libwww
,lwp-trivial
,curl
,PHP/
,urllib
,GT::WWW
,Snoopy
,MFC_Tear_Sample
,HTTP::Lite
,PHPCrawl
,URI::Fetch
,Zend_Http_Client
,http client
,PECL::HTTP
. These are all HTTP libraries I DO NOT WANT.panscient.com
. Who would forget panscient.com in a list of bad bots?IBM EVV
,Bork-edition
,Fetch API Request
.[A-Z][a-z]{3,} [a-z]{4,} [a-z]{4,}
. This matches nonsense user-agents such asZobv zkjgws pzjngq
. Most of these originate from layeredtech.com. Did I mention you should block Layered Technologies?WEP Search
,Wells Search II
,Missigua Locator
,ISC Systems iRc Search 2.1
,Microsoft URL Control
,Indy Library
. Oldies but goldies. Sort of.
More stuff to potentially block
You might also block some or all of the following user agents:

- `Nutch`
- `larbin`
- `heritrix`
- `ia_archiver`

These come from open-source search engines and crawlers, or from the Internet Archive. I don’t see any benefit to my site, so I block them as well.
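As a starting point, here is a minimal lighttpd sketch in the style of my earlier bot-blocking post – the pattern list is only an illustration, so trim it against your own logs before using it:

```
# Hypothetical starting point - adjust the patterns to your own logs.
$HTTP["useragent"] == "" {
    url.access-deny = ( "" )
}
$HTTP["useragent"] =~ "(^Java|^Jakarta|libwww|lwp-trivial|Snoopy|Missigua Locator|Indy Library)" {
    url.access-deny = ( "" )
}
```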
Links
The Project Honey Pot statistics are usually a good place to keep an eye on.
- Harvester User Agents | Project Honey Pot
- Top Web Robots | Comment Spammer Agents | Project Honey Pot
- Behind the Scenes with Apache's .htaccess – good resource for the Apache users (I know they exist).