ExtortionLetterInfo Forums

ELI Forums => Getty Images Letter Forum => Topic started by: Robert Krausankas (BuddhaPi) on August 23, 2011, 06:04:22 PM

Title: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 23, 2011, 06:04:22 PM
This has been going through my head since I read the original post. What are your opinions: could this actually be plausible and legal? I know most of us aren't lawyers, and I may just run this by one..

Send Getty a letter stating that you DO want their bot to crawl your site, as long as it conforms to certain conditions, such as not indexing image directories or crawling directories that are listed as excluded in the site's robots.txt file. However, you wish to be paid $100.00 per non-compliant visit, and you will be logging the activity. This is a legal “offer” to allow them to spider your site. Their spider entering your site after the offer notice may constitute acceptance of the offer. Bill them and sue them for non-payment. Let’s get EVERYONE to send this letter so that they need to deal with this situation manually and spend tens of millions of dollars in labor to comply. Do not accept any web forms to opt out; make them do this all by hand, and bill them until they comply, but sue them anyway for not complying.
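[Editor's note: as a rough illustration of the "logging the activity" piece, here is a hypothetical sketch, not a recipe anyone in the thread actually uses. It assumes a combined-format access log named access.log, a hand-copied list of Disallow prefixes from your robots.txt, and a placeholder user-agent substring to watch for.]

# Hypothetical sketch: count "non-compliant" bot visits in an access log
# and price them at $100 each, per the letter idea above.
import re

# Combined log format: ip - - [date] "GET /path HTTP/1.1" status bytes "referer" "user-agent"
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[.*?\] "(?:GET|HEAD) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

DISALLOWED_PREFIXES = ("/images/", "/photos/")  # mirror your robots.txt Disallow lines
BOT_SIGNATURES = ("picscout",)                  # case-insensitive substrings to watch for
RATE_PER_VISIT = 100.00

def tally(log_path):
    """Return (ip, path, user_agent) for every watched-bot hit inside a disallowed directory."""
    visits = []
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if not match:
                continue
            ip, path, user_agent = match.groups()
            if any(sig in user_agent.lower() for sig in BOT_SIGNATURES) and path.startswith(DISALLOWED_PREFIXES):
                visits.append((ip, path, user_agent))
    return visits

if __name__ == "__main__":
    hits = tally("access.log")
    print(f"{len(hits)} non-compliant visits -> ${len(hits) * RATE_PER_VISIT:,.2f}")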


And while we're here, what are your thoughts on this: if I have specific instructions in my robots.txt file to block certain spiders/bots/scrapers, and PicScout ignores this, doesn't this equate to essentially hacking?? As well as theft of my bandwidth??.. I've already gone so far as to block the whole range of IPs associated with Israel, but this isn't enough for me. I'd really like to come up with a way to turn the tables, and get them where it hurts..
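[Editor's note: on the IP-blocking side, country- or company-level blocks are normally done at the firewall or in the web server config, but the gist fits in a few lines. This is a hypothetical sketch; the CIDR ranges below are documentation placeholders, not real PicScout or Israeli allocations.]

# Hypothetical sketch: check whether a request's source IP falls inside blocked CIDR ranges.
import ipaddress

BLOCKED_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "192.0.2.0/24",     # placeholder range, not a real PicScout block
    "198.51.100.0/24",  # placeholder
)]

def is_blocked(ip_string):
    """True if the client IP belongs to any blocked network."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in network for network in BLOCKED_RANGES)

print(is_blocked("192.0.2.55"))   # True
print(is_blocked("203.0.113.9"))  # False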

thoughts, feedback, IDEAS
Title: Re: Is this plausible??
Post by: photographer on August 23, 2011, 07:39:23 PM

thoughts, feedback, IDEAS

You can't stop them searching your ISP's proxies and caches though.
Title: Re: Is this plausible??
Post by: Jerry Witt (mcfilms) on August 23, 2011, 08:47:07 PM

thoughts, feedback, IDEAS

You can't stop them searching your ISP's proxies and caches though.

Who cares? The idea wouldn't be to stop them from looking. The idea would be to make it burdensome for Getty to spider a site.

If I want to use a photo that a site claims is in the public domain, I have the extra burden of making sure it is not represented by a stock agency. So it is justifiable that if I allow a stock agency to tie up bandwidth on my site and inspect it with spiders, then I am within my rights to charge for this.

If enough people did this, it would make a difference. They would certainly have to spend time and money dealing with the issue (as we all have had to do).

If there were a template letter I would at least send it. If there was documentation on spotting the right user agent I would start logging it.

However, this all hinges on this written "demand letter" -- a written warning/request. I don't think hauling anyone into court for spidering your public-facing site while not obeying the robots.txt file would fly. Absent a direct request (demand), the choice to obey robots.txt seems to be up to the person developing the spider.

But if you've been told not to spider it, and you do, that is a whole different story. I doubt any of this has been tested in court yet.
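[Editor's note: for context on how little work "obeying robots.txt" actually involves, here is a minimal sketch of the check a polite crawler makes before each fetch, using Python's standard urllib.robotparser. The site URL and crawler name are placeholders, not any real bot.]

# Hypothetical sketch of the check a robots.txt-respecting crawler performs before fetching.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")   # placeholder site
rp.read()

user_agent = "ExampleImageBot/1.0"            # placeholder crawler name
url = "http://example.com/images/photo123.jpg"

if rp.can_fetch(user_agent, url):
    print("robots.txt allows this fetch")
else:
    print("robots.txt disallows this fetch; a polite crawler stops here")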
Title: Re: Is this plausible??
Post by: SoylentGreen on August 24, 2011, 12:14:53 AM
These are interesting ideas.

Correct me if I'm wrong, but I think that "The Democratic Underground" had quite an impressive warning to Righthaven that they weren't legally allowed to spider the site.
It was pretty clever.  I'm trying to find that page; if anyone else knows of it, please post the link.  I wonder what became of it?

In any case, Righthaven lost to "The Democratic Underground".
Righthaven didn't own the copyrights and couldn't sue on behalf of the actual copyright owner.
Getty and Masterfile victims take note here.  Being an exclusive agent doesn't transfer copyright.

By the way, proxies and caches can be cleared out without much problem on a regular basis.

Has anyone considered keeping their corporation as usual, but having a second corporation (with no assets or property) "publish" the sites?
You could have the crap sued out of you, but you could just close the corporation that publishes the web stuff (but has no assets).
I don't know that this could work; I'm just throwing it out there.  But, this is essentially what I'm going to try to make happen.
It'll cost some fees to lawyers and also administrative fees, etc.  But for me personally, it would be worth it.
In addition, there are businesses that cater to the demand for "shell corporations", which are often simply housed in nondescript houses/buildings with hundreds of mailboxes inside and a secretary.  There's a good one in Montana, if I'm not mistaken.
By the way, before anyone says that it's illegal, it isn't.  Cheeky monkey.
I should mention that I'm not intending to do anything that would infringe on anyone, etc.  It's just "insurance".

S.G.

Title: Re: Is this plausible??
Post by: newzshooter on August 24, 2011, 02:13:47 AM
You could have a problem with this clause:

504(c)(3)(A) In a case of infringement, it shall be a rebuttable presumption that the infringement was committed willfully for purposes of determining relief if the violator, or a person acting in concert with the violator, knowingly provided or knowingly caused to be provided materially false contact information to a domain name registrar, domain name registry, or other domain name registration authority in registering, maintaining, or renewing a domain name used in connection with the infringement.

Title: Re: Is this plausible??
Post by: photographer on August 24, 2011, 04:44:37 AM

thoughts, feedback, IDEAS

You can't stop them searching your ISP's proxies and caches though.

Who cares? The idea wouldn't be to stop them from looking. The idea would be to make it burdensome for Getty to spider a site.


They don't need to spider your site. They can spider the proxy of whoever visits your site. Google takes caches of sites it searches and stores them. These are spiderable.
Depending on your ISP setup, every time you access a site the ISP can take mirrors of it on their proxies, gateways, etc.

I don't mean the actual proxies on your own computer, but the ones the ISPs, search engines, wayback machines, etc. use. If they find something in the ISP caches they just go to the site, take a screenshot, and they're done.
Title: Re: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 24, 2011, 08:57:07 AM

Depending on your ISP setup, every time you access a site the ISP can take mirrors of it on their proxies, gateways, etc.

I don't mean the actual proxies on your own computer, but the ones the ISPs, search engines, wayback machines, etc. use. If they find something in the ISP caches they just go to the site, take a screenshot, and they're done.

There may be a simple work-around to this...I already have archive.org and Google not caching my pages, however you raise a good point with ISPs and proxies. I believe if the pages are served securely (https) they won't be cached..now to do some research to see if this is in fact true. Mind you I have nothing to hide, but I do host well over 200 domains, so this may aid my client base, and also help prevent the theft of my bandwidth, along with installing a bot-trap on the servers..
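[Editor's note: on the caching point, serving pages over HTTPS does keep most transparent ISP proxies from storing copies, and you can also say so explicitly with response headers (Google documents the X-Robots-Tag noarchive directive for its cache; Cache-Control: no-store is what well-behaved shared proxies honor). A minimal WSGI middleware sketch, offered purely as an illustration; demo_app and the port are placeholders.]

# Hypothetical sketch: add "don't cache me" hints to every response.
from wsgiref.simple_server import make_server

def no_archive(app):
    """Wrap a WSGI app so every response asks crawlers and proxies not to keep copies."""
    def wrapper(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            headers = list(headers) + [
                ("X-Robots-Tag", "noarchive, noimageindex"),  # honored by Google
                ("Cache-Control", "no-store, private"),       # honored by well-behaved proxies
            ]
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return wrapper

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"nothing to cache here\n"]

if __name__ == "__main__":
    make_server("", 8000, no_archive(demo_app)).serve_forever()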
Title: Re: Is this plausible??
Post by: SoylentGreen on August 24, 2011, 11:27:32 AM
You could have a problem with this clause:

504(c)(3)(A) In a case of infringement, it shall be a rebuttable presumption that the infringement was committed willfully for purposes of determining relief if the violator, or a person acting in concert with the violator, knowingly provided or knowingly caused to be provided materially false contact information to a domain name registrar, domain name registry, or other domain name registration authority in registering, maintaining, or renewing a domain name used in connection with the infringement.

You make an excellent point here, 'Newzshooter'.

However, a "shell corporation" service has a secretary that handles phone calls, and mailboxes so that the corporations can get mail.
So long as a response can be made by the corporation to verify the contact info, everything is ok.
Indeed, we needn't register our web site(s) to our house or apartment address; often it's our registered place of business.

Again, shell corporations aren't illegal in most places, and my suggestion isn't intended to invent a scheme wherein wilful infringements could be made.

I think that the 'rebuttable' provision of the statute implies that whether or not the infringement was 'wilful' would have to be argued in court; only a judge could interpret 'intent'.
In any case, the photographer or stock image company will say that the intent was wilful regardless of the circumstances.
But, by the time it could reach 'court', the web registration would be updated, and the infringing content removed.

I must say that if a site of mine was shut down for any reason, the parties involved would have to show cause for doing so, as I'd file a lawsuit to collect damages as soon as possible.
Some people have sites that generate hundreds of thousands of dollars daily; shutting a site like this down could be quite expensive if the other party doesn't have 'cause'.
Riddick/Imageline damaged Bernina's business, and now he's in quite a bit of trouble.

"GoDaddy" is afraid to get in the middle of any of this.  They're a bit too quick to pull the plug on any complaint.  So, they're one to avoid in my opinion.

Anyone who's very active on the 'web such as 'buddhapi' with many domains is at great risk for copyright trolling activities.
Most of the trolling is simply threats with no substance.

It would be great to keep the 'noise' down to a minimum.

S.G.

(http://img6.imageshack.us/img6/4113/trollbrothersmfariuscro.jpg)

Title: Re: Is this plausible??
Post by: Rainbow Queen on August 25, 2011, 08:50:49 PM
ISP proxies and Google do not cache images. Only your webserver will cache an image and serve it from memory instead of the hard drive. Also, your browser may cache the image on your personal computer's hard drive. You will notice that Google Images points directly to the URL on your webserver. Once you remove the pictures from your website, they don't show even in Google cache.

Posting IP ranges here to block PicScout is not good because they will change them once they see it advertised. You need to find them on your own and block them.

There are more companies to block than PicScout. Again, I will not post them here because it is counterproductive. They will change their IPs or ISPs.
Title: Re: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 25, 2011, 09:12:56 PM
I beg to differ. Google does indeed cache pages and images; do a simple Google search and the results include cached links. In order to not have them cache your pages you must request it. There are various other sites that also cache pages, but I won't advertise them here..
Title: Re: Is this plausible??
Post by: Rainbow Queen on August 25, 2011, 09:32:58 PM
Buddhapi, Google will cache the link to the image, but the cached link is not to an image residing on Google's servers. The image is on your server. Google only caches a URL to the image on your server. If a user or a spider comes from a banned IP to view the cache, they will read the HTML from the cached webpage, but will not be able to view or download the images, because they are on your server, where you banned access for that IP.
Title: Re: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 26, 2011, 07:02:11 AM
In regards to images, yes, the images in Google search are pulled from your server..however, in Google's regular web search, they cache the entire site, HTML files and all, so all Getty has to do is pull the cached version of your site from Google's servers and get a screen capture..easy enough to request that Google not do this, along with archive.org aka the Wayback Machine..There are a ton of other sites that also cache our sites, and these caches are on their servers..

http://www.googleguide.com/cached_pages.html
Title: Re: Is this plausible??
Post by: Jerry Witt (mcfilms) on August 26, 2011, 07:44:56 PM
Again, I feel this entire discussion is sidetracking the brilliance of the original post (and original idea).

The idea isn't to PREVENT them from spidering your site (or cached versions of your site). The idea is to make it a pain in the ass to have to remove a hundred different sites from their PicScout program.

If enough people sent their own "demand letter" out, the stock companies would have to have their lawyer$ evaluate the claim. Then they would likely have to have $omeone $tart omitting these domain$. And you just know that sometimes one will slip through. That can really $tart to add up.
Title: Re: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 26, 2011, 08:38:00 PM
Maybe Oscar will stumble upon this and offer up some insight as to whether it would hold water...if so I myself would be willing to pitch in to get a draft prepared, if not pay for it myself..as mentioned I have over 200 hosting accounts, so that's a good number of letters to be sent right out of the gate, plus anyone else here who would want to take part..
Title: Re: Is this plausible??
Post by: Oscar Michelen on August 28, 2011, 12:28:18 PM
The idea has some merit, but just sending a letter without some method of proving receipt and acceptance won't be enough, and of course, as indicated in other posts, there are legal ways the companies could get around this. We have long ago advised folks on this site to clear your Google cache and the cache on archive.org to prevent further spidering through these sources. I am not technically savvy enough to determine if there is a way to limit all discovery of what's on your site. But a letter to them demanding they stay off the site, as described originally, would be a start.
Title: Re: Is this plausible??
Post by: SoylentGreen on August 31, 2011, 10:55:10 AM
Here's that link with a warning to Getty, Picscout, et al.
I thought that it was from the Democratic Underground, but it's from DC Direct Action News.

http://dcdirectactionnews.wordpress.com/legal-notice-to-getty-images-scanning-robot-picscout-is-not-authorized-to-access-this-site/

It reads in part:

"LEGAL NOTICE TO PICSCOUT, GETTY IMAGES, PICSCOUT CLIENTS: You are prohibited from accessing this site.

1: Permission for the copyright scanning robot program known as Picscout to access this site is explicitly denied.
All other robots which scan content for the purpose of any form of law enforcement, criminal or civil, are also denied permission to access this site at any time.

2: Most of the photos here were taken by our own cameras.
They are licensed for all noncommercial reproduction EXCEPT by law enforcement, by Getty Images, or by any other corporation that has at least once filed a copyright infringement lawsuit against one or more online users of their content.
Use of any original DC Direct Action News story, photo, audio, or video recording for any purpose by any entity which is a plaintiff in a copyright infringement case is hereby prohibited.

3: Getty Images is explicitly prohibited from using any image that originated in a DC Direct Action News camera for any purpose.
It is up to Getty to guess which ones these are! These photos are released for not-for-profit use by the general public; their use for extortion by an external party claiming copyright against a third party downloader from THEIR site shall be treated as a copyright violation.
 
4: Notice concerning demands for damages originating from Getty Images or other scanning robot users

We regard Getty Images as an organized crime entity engaged in extortion.
As such, all payment SHALL BE REFUSED if any threats of legal action are ever received from Getty Images or any other copyright holder engaging in extortion by demanding “damages” prior to sending DMCA takedown notifications.
Not only will we refuse to pay the funds you demand, we shall instead seek liquidated damages in the amount of $10,000 US per violation of our Terms of Service concerning image scanning robots."

S.G.

Title: Re: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 31, 2011, 11:27:42 AM
OK, here are the questions/comments!

Great find!! Once again SG (sans avatar) rocks!

I wonder if we could persuade Oscar or another legal eagle to review this and suggest any changes to ensure it is legally binding?

How do we present this to GI, MF, PicScout? Will presenting it on a page suffice? Or would mailing it certified mail / return receipt be more effective?

Perhaps we could pool some resources ($$$) and politely ask Oscar to draft us up something. I will gladly send out certified letters on behalf of all of my clients, if I was sure this would work..
Title: Re: Is this plausible??
Post by: SoylentGreen on August 31, 2011, 02:10:18 PM
I like it a lot too, and I'm quite glad that I found my bookmark for it.
I thought that I'd lost it.

I think that to make it legally binding, it would have to be signed like a contract.
Or, it would have to correspond to a "law" that exists on the books.
The other way is to take it to court to "test it".  A court victory would make it legally binding between the site owner and Getty/Picscout for example.
Then, any other site owner could hold this victory over Getty/Picscout.
Those are just my opinions.  I don't know if there's anyone who'd take the time or risk to test it.

I'm doing some research to figure out if Picscout is violating any laws.  I have some ideas, but I need to pin it down a bit more.
The approach is to find something that they're doing that breaches a law or convention, and for which they wouldn't need prior notification of their action to be held liable for their act.
But, again it probably wouldn't be a "criminal statute" sort of thing, so even if there's a 'violation' it would still be up to the 'victim' to prosecute.
However, if there's something in existing law that applies, then we could say that a "win" is plausible, even if it's never guaranteed.

S.G.

Title: Re: Is this plausible??
Post by: Robert Krausankas (BuddhaPi) on August 31, 2011, 02:17:00 PM
Well, clearly PicScout does not follow robots.txt rules and is also fudging its user agent to look like something else. To me this is borderline hacking. Is there a "law" being broken? Probably not, but it would be nice to find such a thing. Good luck in the search.
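
[Editor's note: for anyone who wants to act on the bot-trap idea mentioned earlier in the thread, here is a hypothetical sketch: disallow a path in robots.txt, never link to it from any page a human would see, and log whichever clients request it anyway, whatever user agent they claim to be. The trap path, port, and log file name are made up for illustration.]

# Hypothetical bot-trap sketch: any client requesting a path that robots.txt
# disallows, and that no visible page links to, is probably a scraper.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

TRAP_PATH = "/images/do-not-enter/"   # also listed under Disallow: in robots.txt

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(TRAP_PATH):
            # Record the offender's IP and claimed user agent, then refuse the request.
            with open("bot_trap.log", "a") as log:
                log.write(f"{datetime.now().isoformat()} {self.client_address[0]} "
                          f"{self.headers.get('User-Agent', '-')}\n")
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ordinary page\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), TrapHandler).serve_forever()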