
This is too hilarious not to post.

AFO.net is an ISP and email provider that focuses on services for Christian believers. They specifically filter email and web content based on their family values.

Why is this funny?

Well… I emailed them, basically just to see their email headers, and this is what I found:

Delivered-To: BLANK
Received: by with SMTP id e68csp842145vkg;
Thu, 4 Feb 2016 17:35:56 -0800 (PST)
X-Received: by with SMTP id 5mr5561037ybv.22.1454636156591;
Thu, 04 Feb 2016 17:35:56 -0800 (PST)
Received: from afomail.net (afomail.net. [])
by mx.google.com with ESMTPS id v193si7606667yba.49.2016.
(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Thu, 04 Feb 2016 17:35:56 -0800 (PST)
Received-SPF: pass (google.com: domain of sensley@afomail.net designates as permitted sender) client-ip=;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of sensley@afomail.net designates as permitted sender) smtp.mailfrom=sensley@afomail.net
Received: from afomail.net (localhost [])
by afomail.net (8.14.4/8.14.4) with ESMTP id u151ZtjO009692
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO)
for ; Thu, 4 Feb 2016 19:35:55 -0600
Received: (from apache@localhost)
by afomail.net (8.14.4/8.14.4/Submit) id u151ZsxY009691;
Thu, 4 Feb 2016 19:35:54 -0600
Subject: Re: Contact Message from MYAFO by BLANK for Sales
X-PHP-Originating-Script: 501:rcube.php
MIME-Version: 1.0
Content-Type: multipart/alternative;
Date: Thu, 04 Feb 2016 19:35:54 -0600
From: Steve E
Organization: Cleanwww Inc - dba American Family Online
Reply-To: sensley@afomail.net
Mail-Reply-To: sensley@afomail.net
In-Reply-To: <201602050031.u150VUrL021047@afoconnect.com>
References: <201602050031.u150VUrL021047@afoconnect.com>
Message-ID: <766ce80eddf4eafe918a038c6eb2d236@afomail.net>
X-Sender: sensley@afomail.net
User-Agent: Roundcube Webmail/1.1.4
X-Scanned-By: MIMEDefang 2.78 on

Notice something…

MIMEDefang 2.78 is the milter they use. Good choice. But maybe I should tell them that the core software they use was written by a transgender woman, Dianne Skoll. It's nice to know that AFO is open-minded when it comes to saving money by using open-source software written by someone who is transgender. Progress…


How can grid-tie solar be bad for everyone?

This is slightly off-topic, but it nonetheless falls into my wheelhouse of all things electrical.

Many people are aware of the concept of grid-tie solar systems, either DIY or through a company like SolarCity. Sidebar: Stay away from SolarCity; it's far better to simply buy your own panels and install them than to have SolarCity do it on their dime and then be locked into a 20-year power pricing agreement.

Recently, however, a crack has been found in the system that threatens its future as more and more people adopt it. The problem is simple to understand; let me explain.

Most people get a power bill that has 2 or 3 cost components. The largest is the power generation fee, which may be something like $0.08 per kilowatt-hour. The second-largest component is the transmission fee. This is also billed by the kilowatt-hour and may be something like $0.035 per kilowatt-hour. The third component is a flat customer or connection fee.

What do these bill components represent?

The generation fee is the cost of the actual electricity you use, which needs to be generated somewhere at a power plant. It varies nationwide; in the Mid-Atlantic region it's around $0.07 to $0.09 per kilowatt-hour.

The transmission fee is a pass-through capital expense. From the standpoint of your local utility company, this fee seeks to recoup all costs associated with running a utility and delivering power to your house. It covers capital costs like power lines, poles, vehicles, and equipment, as well as wages for the linemen, gas for the utility trucks, etc.

The connection fee is a flat charge that mainly covers the cost of your meter and the cost of actually billing you on paper.

It's important to note that the transmission fee is tied to your metered usage. The more power you use, the more you "use" the utility's infrastructure, and thereby the more you pay in transmission fees.

Grid-tie solar basically takes advantage of this system; an economist would call it arbitrage.

With grid-tie, your solar panels' inverter converts DC power into AC house power, and that power feeds in behind your meter. When your household usage is greater than the solar output, the solar output gets used up at the source in your electrical panel, with the extra power demand coming in through your meter. Another way to view it: your solar panels "slow down" your meter during peak usage. Even better, at times of low household usage, any excess solar power flows back into the grid; the grid is effectively used as a battery. Even better still, the local utility credits the negative meter hours against your account for the month. The end result is that a house pulling 800 kilowatt-hours a month could have a metered power bill of only 50 kilowatt-hours if the panels were properly sized.

What's the problem then? Why is this arbitrage?

Here is the issue. That house pulls 800 kilowatt-hours a month, and even though its bill might only show 50 kilowatt-hours due to the grid-tie solar generation, there are many times through the month when the house needs 100% of its power demand from the grid (cloudy summer day, hot summer night, etc.). If you extend that thinking out logically, that house needed 800 kilowatt-hours' worth of infrastructure even though it's being billed as a 50 kilowatt-hour house.

Let's look at the numbers:

800 kWh x $0.035 transmission = $28/month

50 kWh x $0.035 transmission = $1.75/month

WOW… The utility company just lost a ton of money that it needs for the capital infrastructure supporting your house. And even though you are only being billed for 50 kilowatt-hours, your house's footprint in the utility universe really hasn't changed from that of an 800 kilowatt-hour house.
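
The shortfall above is easy to check with a quick back-of-the-envelope calculation. This sketch uses the illustrative $0.035/kWh transmission rate from this post, not any specific utility's tariff:

```python
# Illustrative rate from this post, not any specific utility's tariff.
TRANSMISSION_RATE = 0.035  # $/kWh

actual_usage = 800  # kWh the house really pulled from/through the grid
net_metered  = 50   # kWh left on the meter after solar rolls it back

collected = net_metered * TRANSMISSION_RATE   # what the utility bills
needed    = actual_usage * TRANSMISSION_RATE  # what the house's grid footprint implies
shortfall = needed - collected

print(f"Collected ${collected:.2f}, needed ${needed:.2f}, "
      f"shortfall ${shortfall:.2f}/month")
```

Multiply that monthly shortfall across every solar household on the grid and the revenue hole gets large quickly.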

How did this happen?

Well, the utilities back in the day were sort of forced to allow net metering – the concept of letting your meter roll back while excess solar power is fed into the grid. Nobody thought widespread solar use would take off. Utilities are in a bind now: they are seeing a huge drop in transmission-fee revenue, even though they have the same number of households and the same infrastructure to support.

Why is this bad for me?

Well, when utilities have a revenue shortfall, the first thing they do is jack up the rate for everyone else to make up for it. This has already happened in Nevada. Nevada has one of the highest rates of grid-tie solar adoption, and it was starting to eat into the pockets of the utilities. Earlier this month, the Nevada PUC changed the net-metering rates to effectively discourage solar.

Is there hope?

I sure hope so. The utilities have a valid concern: the infrastructure needs to be paid for, and the current system definitely favors the homeowner with a solar panel array. But there are better ways to resolve the issue. One option is, instead of net metering, to have two completely separate meters – your original "untouched" meter, plus a second new meter wired directly between the outside service and your grid-tie solar panels. With this setup, the utility can still subtract your generated hours from your main usage meter for the "generation" portion of your bill, but can charge the full usage on the transmission-fee side.
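
The two-meter idea can be sketched as a billing function. This is a hypothetical model using the example rates from this post; the $10 connection fee is an assumed placeholder, not a real tariff:

```python
# Hypothetical two-meter billing: solar offsets generation charges only,
# while transmission is charged on full metered usage. The $10
# connection fee is an assumed placeholder.
def two_meter_bill(usage_kwh, solar_kwh,
                   generation_rate=0.08, transmission_rate=0.035,
                   connection_fee=10.00):
    # Meter 1 records total household draw; meter 2 records total solar
    # production fed to the grid.
    generation_due = max(usage_kwh - solar_kwh, 0) * generation_rate
    transmission_due = usage_kwh * transmission_rate  # no solar offset here
    return generation_due + transmission_due + connection_fee

# The 800 kWh house generating 750 kWh of solar still pays full
# transmission ($28) plus generation on the net 50 kWh:
print(f"${two_meter_bill(800, 750):.2f}")
```

Under this scheme the homeowner keeps the generation savings, while the utility keeps the transmission revenue its infrastructure actually requires.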


It's a Winter Wonderland… Awful for Business Continuity

Anyone living in the Mid-Atlantic region is well aware of the recent weather conditions wreaking havoc on local residents and businesses. Just last week, PECO had over 600,000 power outages in the five-county Philadelphia area. Many of these outages hit businesses, and in some cases lasted over 4 days.

Can you imagine the amount of lost productivity for a 4-day total outage to a business?


We can. We see it every day. Small to mid-sized businesses still heavily rely on local servers at their office to run critical apps for payroll and accounting, not to mention email. Email is easy to put offsite, but how do you add business continuity to office apps like QuickBooks? Remote data backup is NOT the solution. If your office is dark, having a copy of your data in the cloud does not help you get back up and running. What you need is true business continuity.

For most businesses, the simplest solution is using virtualization to replicate your core office servers and desktops to a remote data center. The servers are replicated exactly, including all applications and data. The desktop environments are built to contain basic applications and custom apps that talk to the server, effectively replicating a working office environment. Everything is pre-built and put into standby mode. Live data is then backed up daily.

When an emergency event happens and the office goes offline, employees can use their home internet access to remotely connect to the virtualized desktop environments sitting at a safe and secure datacenter. From those desktop sessions, they can access replicated copies of their office file servers, office apps like QuickBooks or ACT, and even email.

A recent study shows that even during widespread power outages, 8 out of 10 office employees typically have power and internet at home even when their employer's office goes down, and if they don't have power or internet they can be mobile and find a location that does.

What’s the alternative?

Add a generator to your office and some form of radio-based 4G internet access. 4G internet will cost between $50-$100 per month plus a few hundred in startup costs for equipment. A generator will cost approximately $25,000 to $50,000 (depending on building load size) for most small to mid-size businesses in the 2,000 to 10,000 sqft range. That doesn't include semi-annual maintenance and testing, and during a failure, extended run-time beyond 24 hours can be difficult.

Virtualization is much more affordable…

To virtualize a server and 5 desktops, you are looking at approximately $250-$300/month recurring with maybe $1000-$2000 in one-time setup fees. That's a much better alternative to a generator install that may not even be reliable. Better yet, this DR solution also acts as a data backup solution, which most businesses need anyway.
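
A rough five-year comparison makes the point. The figures come from this post; the generator maintenance cost and the range midpoints are my assumptions for illustration:

```python
# Rough five-year cost comparison using the figures from this post.
# Maintenance cost and range midpoints are assumptions, not quotes.
YEARS = 5

# Option 1: generator plus 4G failover internet
generator_capex = 25_000           # low end of the quoted $25k-$50k range
generator_maint = 1_500 * YEARS    # assumed semi-annual service visits
fourg_service   = 75 * 12 * YEARS  # midpoint of $50-$100/month
fourg_setup     = 300              # "a few hundred" in startup equipment
generator_total = generator_capex + generator_maint + fourg_service + fourg_setup

# Option 2: virtualized DR for one server and five desktops
dr_service = 275 * 12 * YEARS      # midpoint of $250-$300/month
dr_setup   = 1_500                 # midpoint of $1000-$2000 one-time
dr_total   = dr_service + dr_setup

print(f"Generator: ${generator_total:,}  Virtualized DR: ${dr_total:,}")
```

Even with the low-end generator price, the virtualized option comes in at roughly half the cost, and it doubles as a backup solution.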

Ditch the generator and virtualize your office for true business continuity and disaster recovery!



Posted in Uncategorized


In my 15 years of IT experience, no term has annoyed me more than the "Cloud". For starters, the term "Cloud" is a simplification of the idea of hosted services for non-technical decision makers. For the past 10 years or so, businesses have been successfully using hosted services in many ways; what is changing is how that service is conceptualized and how it is marketed. At the same time, it is also becoming less and less transparent.

What's the difference between a company maintaining a hosted server environment in a datacenter vs a cloud service?

Effectively, there is no difference. Many companies buy and maintain their own servers, place them in a secure datacenter, and achieve a stable hosted environment. But in a world of cost reduction, some companies pinch their IT budget, lose key staff, and begin to outsource. This is where cloud services latch on. Cloud providers rush in and convince managers that outsourcing everything is the way to go: no hardware to buy, no large staff needed, just pay us to do it all for you for one flat fee.

Sounds great right?

The problem is the cloud provider needs to make money too, and since they are effectively running and deploying the same hardware you would use, they need to cut corners to make a profit. The easiest ways to do this are using refurbished hardware and oversubscribing it across multiple clients, and running a more cost-effective datacenter with fewer features. By cost-effective datacenter, I really mean a "cheap" datacenter.

You might ask yourself, how can a cloud provider get away with this? Simple. There is no transparency. 95% of cloud providers never disclose where their datacenter is or what its capabilities are. You can't go and see it for yourself. Because the "Cloud" solution is cleverly marketed, buyers forget to verify that the service is powered by an actual reliable network and facility. This happens whenever products are sold and re-packaged: you assume the provider takes on that responsibility, and since they have an SLA in their contract, it's not your problem. But it is your problem. It's still your data, your application. You need to know where it lives; you need to confirm the facility is redundant, has fire protection, has an aggressive, high-speed network, and uses top-of-the-line hardware, not refurbished servers.

Cloud services allow providers to effectively hide their operations from plain view. In the current environment of accountability, it is extremely important to make sure you know what's going on behind the scenes. At the end of the day, if your cloud provider loses your data, yeah, it's their fault, but guess what… your data is still gone. Having someone to blame for a failure doesn't make it any better. Why not try to avoid the failure from ever happening?






Customers may choose you, but it's also important to choose who you want as a customer. For years, I have been telling people that one secret to success in the datacenter business is being picky about what kind of customers you decide to provide service to. This may run counter to popular ideas about business – that anyone willing to pay should be a customer of mine. That may hold true if you sell hamburgers, but when you sell colocation and IP backbone access, it's very important that you stay away from certain types of customers.

Customers I traditionally avoid:

  1. Gamers
  2. Adult Sites
  3. Email Marketers
  4. Proxy Providers

Why should the above group be avoided? Let's look at the four factors that make a customer undesirable. First, can they pay their bills reliably? Second, will they be a long-term customer? Third, will they require a lot of customer service? Fourth, will they impact your resources negatively?

Gamers fail three of the four key criteria: they are bad at paying bills and are not long-term clients, and while they don't require a lot of assistance, they do negatively impact network utilization and are more prone to DDoS attacks. Adult sites can be good payers and long-term clients, but they require a lot of assistance and have a negative impact: adult sites get DDoS attacks frequently, their bandwidth usage is highly volatile, and their handling of illegally copied material causes major headaches and liabilities. Email marketers fail all four criteria: they're short-term clients, are not good payers, abuse IP resources, and are strong DDoS targets because everybody hates spammers. Proxy providers are interesting: they are good payers and long-term clients, but they carry a lot of baggage since proxy providers can't control what the users behind the proxy do. In our experience, proxy providers cause a lot of headaches and generate a ton of abuse complaints from their proxy clients doing everything from sending spam to running botnet attacks.
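
The screen above boils down to a simple scorecard. This sketch encodes the post's qualitative judgments as booleans; the criteria come from the text, but the exact encoding of each borderline call is mine:

```python
# The four-factor screen, with the post's judgments encoded as booleans.
# True means the customer type passes that criterion.
CRITERIA = ("pays_reliably", "long_term", "low_support_load", "low_resource_impact")

CUSTOMER_TYPES = {
    "gamers":          (False, False, True,  False),
    "adult_sites":     (True,  True,  False, False),
    "email_marketers": (False, False, False, False),
    "proxy_providers": (True,  True,  False, False),
}

def failed_criteria(scores):
    """Return the names of the criteria this customer type fails."""
    return [name for name, ok in zip(CRITERIA, scores) if not ok]

for ctype, scores in CUSTOMER_TYPES.items():
    fails = failed_criteria(scores)
    print(f"{ctype}: fails {len(fails)} of 4 ({', '.join(fails)})")
```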

You are the company you keep…

The type of people you have in your datacenter should reflect who you are as a business. I run a very stable, reputable business, so as such, I prefer that my customers be stable, reputable businesses. In the long term, this philosophy has worked very well for me.


Choosing a local VOIP Provider for your Business Phone

We recently switched over to a new VOIP provider for our phone services. In the past we were using one of those large national providers; service was okay, but it could have been much better. Telecom is normally treated as a commodity: if all the features are there, it's a matter of who has the lowest price. Yes, everyone has the same features, and prices can vary, but there are several intangible aspects that you can't easily identify.

We decided to use a local VOIP provider in our area because we felt that having local access to the company would be beneficial. The provider, Essenz, Inc., is based in Lafayette Hill, PA – just 20 miles or so from our offices. Their business phone solution had all the features we were looking for, and the price was very competitive. Best of all, the phones were personally delivered to our location and set up by a technician – this is free and included in their service. Because they are local, they also offer same-day (4-hour response) hardware replacement if a phone fails.

These bonus features make all the difference, and trust me, if your phone dies, waiting 1-2 business days for a replacement to be shipped is not ideal. I encourage people to look locally first; you'll be amazed how many great providers are right in your backyard.


Influx of Colocation in Philadelphia post Hurricane Sandy

We have seen a dramatic increase in colocation activity in the Philadelphia area following Hurricane Sandy.  Sandy effectively knocked out several North Jersey and Lower Manhattan facilities for over a week.  The Whitehall St. facility in Lower Manhattan was without power for over a week.  Then there were datacenters that had power but lost IP connectivity when their circuits from Manhattan went down.

On top of all of this, a local Philadelphia facility (Voicenet) decided to shut down its colocation operations.  We have actually moved in over 5 clients from Voicenet alone, and another 6 clients from various providers in the Metro NYC area.

What did we learn from Hurricane Sandy?

1. Don’t put your datacenter in a building with a below grade electrical room.

2. Don’t put your datacenter in an area without diverse IP POPs.

The first rule is obvious, or so you would think, but amazingly, people in NYC build telecom operations in buildings that have below-grade electrical rooms. In the case of Whitehall, not only was the electrical room below grade, but so were the fuel pumps for the generators.  The second rule is broken all the time.  I can't tell you how many facilities claim to be multi-homed with multiple carriers, but when you look closely, you find out that all those carriers come in via a single fiber ring that runs to a single POP.

There was one datacenter in Boston that had fiber running to Whitehall St. in Lower Manhattan, so when Whitehall went dark, the facility in Boston lost all IP connectivity. Boston has local IP POPs; why get all your connectivity out of Whitehall in NYC? The answer is cost.  It's cheaper to put everything on one big pipe and send it to a heavily trafficked POP like Whitehall, but as the old saying goes… you get what you pay for.



Posted in Main | Leave a comment


Another local datacenter outage to report. The Voicenet facility located in Northeast Philadelphia had a major network outage last night. Reports from a colleague of mine who manages equipment there indicate the outage lasted about 45-60 minutes. The outage even took out Voicenet's phone system, so customers who have equipment in the Voicenet datacenter couldn't even call in and complain.

I can't say it enough… Don't colocate equipment in datacenters that don't have true diverse network connectivity. Voicenet claims redundant fiber, but it's just a single fiber ring. Yes, a ring has redundant fiber and two paths to survive a fiber break, but it's still a single ring operated by a single entity, with an equipment SPOF (single point of failure) at the other end. The only things a ring protects against are a backhoe digging up the street or a tree falling down. Datacenters need to have truly diverse fiber. That means separate fiber paths coming in via separate entrances, and the fiber itself must be owned and operated by separate entities with completely separate routing platforms.

The scary thing is there are several datacenters in the Delaware Valley that operate off fiber-ring topologies. Stay away from these datacenters; it's just an outage waiting to happen.

Posted in Main


I hear this all the time. Most people move out of a datacenter because something bad happened, and it's usually a major power failure that causes the most trouble. In this article, I am going to outline and analyze a power failure event that occurred at an unnamed facility. This is a true story.

About 2 years ago I fielded a call from someone who lost power at their current datacenter provider. In addition to being down, they also had some equipment failures (power supplies and some RAM went bad in a few systems). Their provider told them that nothing was wrong with the UPS; rather, it was an issue with the utility caused by a brownout. As soon as I heard this, I told the person that this explanation was completely bogus.

Let's recap the cardinal rules of a good UPS:

1. An online UPS setup should always provide clean line power regardless of the supply.

2. If an online UPS fails, an auto-sync transformer bridges line power to utility within 1 Hz; no power is lost, only backup capability.

And lets recap what you need to do in order to make sure the above rules always apply:

1. Check your batteries every 3 months.

2. Replace a battery as soon as its internal resistance rises by 10%.

3. Replace a battery as soon as it's 4 years old, even if its internal resistance is still within spec.

4. Provide suitable cooling to the UPS.
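
The replacement rules above can be written as a simple check. The record shape and field names here are hypothetical; the 10% and 4-year thresholds are the ones from this list:

```python
from datetime import date

# Battery replacement check per the rules above. Field names are
# hypothetical; the 10% rise and 4-year thresholds are the post's.
def needs_replacement(installed, baseline_mohm, current_mohm, today=None):
    today = today or date.today()
    age_years = (today - installed).days / 365.25
    resistance_rise = (current_mohm - baseline_mohm) / baseline_mohm
    # Replace at a 10% internal-resistance rise, or at 4 years regardless.
    return resistance_rise >= 0.10 or age_years >= 4.0

# Within resistance spec (5% rise) but past the 4-year age limit:
print(needs_replacement(date(2011, 6, 1), 4.0, 4.2, today=date(2016, 2, 4)))
```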


I can't stress enough how important batteries are. The entire UPS is built around the concept of having working batteries. Almost every line-affecting outage of a UPS is due to a battery problem. At Quonix, we use Liebert Series 300 UPS systems that have had inverter boards fail, induction coils burn out, and input filters short out, and we NEVER lost output line power. That's why the Lieberts cost so much: they are designed to handle failures, but that requires good batteries.

Getting back to the story about the brownout. Any UPS that experiences a brownout or any kind of dirty power will immediately engage its batteries to provide clean power while it activates the GENSET cut-over. This requires the UPS to run on batteries for 5-7 seconds. If the batteries can't hold, the UPS will drop offline into bypass mode and auto-sync to utility line power. Once a UPS goes into bypass and syncs to utility power, it no longer provides power protection or line conditioning, so all the dirty power goes straight through. If power was lost, GENSET power now comes straight through, and when utility power returns, the GENSET cuts out, causing another small blip. This is why the server power supplies and RAM went bad: the dirty, and possibly surging, power came right through the UPS into the rack cabinet.
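
That cut-over sequence can be traced as a toy state machine. The state names and ordering here are illustrative, not any UPS vendor's actual firmware behavior:

```python
# A toy trace of the brownout cut-over sequence described above.
# State names are illustrative, not vendor firmware states.
def ups_response(dirty_power, batteries_hold):
    """Return the sequence of UPS states for a power event."""
    states = ["online"]
    if dirty_power:
        states.append("on_battery")    # batteries supply clean power
        states.append("genset_start")  # the 5-7 second cut-over window
        if batteries_hold:
            states.append("on_genset")  # protected transfer completes
        else:
            states.append("bypass")     # raw utility/genset power passes through
    return states

print(ups_response(dirty_power=True, batteries_hold=False))
```

With weak batteries, the trace ends in bypass, which is exactly the failure mode that fried the power supplies and RAM in this story.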

Many providers don't properly maintain their batteries. They just assume the batteries will last 4-5 years. Not the case. I've seen brand-new battery cabinets have 1 battery go bad after as little as a year. Sometimes it's just a random manufacturing defect. And in many cases, all it takes is 1 bad battery to foul the entire array.

Want to be sure your provider is on top of things? Easy: just ask for a copy of their UPS and battery preventative maintenance contract. If they have one, and they should, it should be easy for them to fax or email you a copy. You can even request a battery report. At Quonix, the vendor we use for our battery maintenance sends us a detailed graphical report with the health of each battery – voltage, impedance, internal resistance, temperature, and age.


Repairing Tate Access Floor Tiles

How do you repair floor tiles?

For this article I am referring to the newer style of Tate access floor tiles. The newer style has a single piece of laminate that runs from edge to edge. The older-style tiles had the laminate end about a quarter inch short of the edge, with the remaining space filled with a black edging strip that frequently snapped off.

The new style is great, but over time the laminate will start to pull away, especially in data centers with low humidity. It's simple to repair.

The laminate is held in place by contact glue, similar to a kitchen countertop. Contact glue can be loosened and re-hardened with heat.

To reattach your Tate laminate, get a standard clothing iron – the kind with a non-stick bottom. Set the iron temperature to medium and turn off the steam. Obviously, do this repair work outside the datacenter. Place the iron on the tile's laminate surface and slowly move it around. The laminate surface needs to be heated for at least 2 minutes. Once properly heated, use a surface roller to apply even pressure over the top of the laminate, pressing it down hard onto the tile's underlying metal frame. Continue to use the roller until the surface has cooled down. At this point your laminate will be 100% re-attached.


Posted in Environmentals, Main | Leave a comment