
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - lucia

16
Hi all.
I know some of you thought I was gone.  But I'm just doing what I do-- which is not legal work or business. It's watching server logs and banning things. :)

There are automated image services you will likely want to ban.  Many uses of them are harmless, but they can be used by anyone-- and "anyone," of course, includes trolls seeking images.

The first user agent to block:

WordPress.com mShots; http://support.wordpress.com/contact/

If you google 'mShots' you will see it sounds like a fun, convenient feature for bloggers. But there is a publicly available plugin anyone can use to automatically take screenshots of any web page.  I think the incoming IP will be Automattic-- which is WordPress-- but the screenshot is sent to whomever requested it.  (In reality, very few visitors run screenshots of blogs in their sidebars. So likely all the visits by 'mShots' to my blog are what I would call "dubious".)

The second user agent to block: anything containing SUSE. For example:

User Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.10) Gecko/20100914 SUSE/3.6.10-0.3.1 Firefox/3.6.10

Browsers reporting "Linux" are often run on servers.  Servers are very handy places to run image-scraping scripts (which you generally want to discourage at your site). However, depending on your audience, you don't want to block everything running Linux. (Many of my blog visitors browse the web from workstations running Linux.)  But SUSE/3.6 is a bitmap feature. Being optimized for bitmaps is very handy for image scrapers; for others, not so much (it makes the browser slow for humans).  This particular UA was left by http://www.shrinktheweb.com/reqstatus?status=1&hash=ece31cfcdf89736101398c3e1fb645ad  (It was also blocked by my auto-blocking software, but not because of the user agent.)

So: I advise blocking anything with 'SUSE' or 'mshots' in the user agent.
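
For anyone who wants to do this in .htaccess, a minimal sketch (assuming your host runs Apache with mod_rewrite enabled) would look something like this:

# Return 403 Forbidden to any request whose user agent contains
# "SUSE" or "mShots" (case-insensitive)
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} SUSE [NC,OR]
RewriteCond %{HTTP_USER_AGENT} mShots [NC]
RewriteRule .* - [F,L]
</IfModule>

Adjust the patterns if you find they catch visitors you actually want.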


17
Getty Images Letter Forum / Image Exchange: Volunteers to find IPs
« on: March 08, 2012, 07:18:31 PM »
I've set up a page that is useful for detecting the full range of IPs used by PicScout's Image Exchange. I'd like to invite people to
a) install the Image Exchange add-on in their browser: http://www.picscout.com/
b) visit my page.

I don't want to post the URL in public, so please PM me for it.  After you are done you can uninstall the Image Exchange add-on.

This will help us block some PicScout IPs.

18
Getty Images Letter Forum / New image bot Pixray
« on: March 02, 2012, 01:42:21 PM »
New image bot
Pixray-Seeker/1.1 (Pixray-Seeker; http://www.pixray.com/pixraybot; crawler@pixray.com)
IP: http://whois.domaintools.com/176.9.31.202

node-176-9-31-202.cluster.eu.webcrawler.pixray.com
I'm going to block "Pixray-Seeker" by user agent and block this entire webcrawler in various other ways. :)
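
For the user-agent part, a minimal .htaccess sketch (assuming Apache with mod_rewrite; the Deny lines use Apache 2.2-style syntax and only cover the single IP I saw, not the whole webcrawler cluster):

<IfModule mod_rewrite.c>
RewriteEngine On
# Block the Pixray-Seeker crawler by user agent
RewriteCond %{HTTP_USER_AGENT} Pixray-Seeker [NC]
RewriteRule .* - [F,L]
</IfModule>

# Also refuse the specific crawler IP seen in my logs
Order Allow,Deny
Allow from all
Deny from 176.9.31.202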

19
Getty Images Letter Forum / New Picscout IP range to block.
« on: February 13, 2012, 03:38:26 PM »
I discussed blocking image bots at my blog. DukeC, a commenter, asked me whether my script blocked the PicScout add-on, which they are marketing as a way for people to identify images for sale.  Of course, the add-on also detects images and reports back to PicScout.   That visitor installed the add-on and hit a post containing images, telling me his IP and user agent.  Four seconds after his visit with the add-on installed, my logs showed a visit from IP 72.26.211.129. My script autobanned 72.26.211.129, which I suspected was an IP reporting back to PicScout.

I then repeated this at a domain I control that does not run my protective script and so was not blocking that IP. In my experiment, I saw a hit from 72.26.211.130 four seconds after I loaded an image with the PicScout add-on installed.  I am pretty convinced these IPs are reporting back to PicScout.

I recommend that those who want to prevent visitors from reporting back to PicScout
a) block the IP range 72.26.192.0 - 72.26.223.255, and
b) periodically use the image add-on to load an innocent image at their blog and monitor the hit that follows.  Repeat this a few times to learn the current range of the PicScout IPs used by the image add-on.

For now, block
United States New York Voxel Dot Net Inc  IP range
NetRange:       72.26.192.0 - 72.26.223.255

I'm going to Cloudflare to block this right now. :)
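
For those blocking in .htaccess instead of (or in addition to) Cloudflare, that range is 72.26.192.0/19 in CIDR notation. A minimal sketch using Apache 2.2-style syntax:

# Refuse the Voxel Dot Net range the PicScout add-on currently reports to
Order Allow,Deny
Allow from all
Deny from 72.26.192.0/19

Remember this range may change, which is why step (b) above matters.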

20
Getty Images Letter Forum / Getty Escalation Letter
« on: February 02, 2012, 10:43:49 PM »
I thought some of you might be interested in the escalation letter I received. I'm also adding my poorly proof-read response and the response I received from Sam Brown.

http://rankexploits.com/musings/wp-content/uploads/2012/02/GettyLetter2.jpg

Quote

From:    lucia@xxxxx.com
   Subject:    Re: Letter 3: Case #:  1144028 No infringement. CRM:09515827
   Date:    January 30, 2012 2:30:33 PM CST
   To:    LicenseCompliance@gettyimages.com, sam.brown@gettyimages.com

To Mr. Brown and the Getty Compliance Team,

Today I received a letter dated Jan 27, 2012 discussing the case you have assigned Getty Case #: 1144028.  Based on the wording of this letter, it seems to me your compliance team is unaware of ongoing communications between myself and Mr. Sam Brown, Copyright Compliance Specialist.

As I communicated to Mr. Brown: There has been no violation of copyright on my part.

Before I reiterate the previous discussions, I would like to be sure that those on the Getty side of the conversation have read the previous communications.   I request that personnel in the Getty Compliance Team obtain a copy of my previous correspondence with the Getty Compliance Team and Mr. Sam Brown. My first email to your group was dated November 29, 2011, Sam Brown's response was dated December 19, 2011, and my reply to Mr. Brown was sent December 20, 2011.

If my reply on December 20, 2011 has gone astray, I will be happy to resend that email both to Mr. Brown and to other members of your License  Compliance Team.   

After members of your team have had the opportunity to read the correspondence and become aware of the facts of the case, I will be happy to continue further discussion. In addition to wishing Getty employees to be aware of the facts of the case before I spend time discussing matters by phone or email, I remain eager for Getty personnel to provide the information I requested of Mr. Brown in my second email. Your firm drawing together the information I requested will greatly reduce the amount of time both your firm and I need to waste on this matter.


Sincerely,
Lucia Liljegren

Note: My 2nd email was discussed at this forum in this post:
http://www.extortionletterinfo.com/forum/getty-images-letter-forum/images-from-rss-getty-images-letter/msg4634/#msg4634

Here's the response I received less than 10 minutes later.
Quote
From:    sam.brown@gettyimages.com
   Subject:    RE: Letter 3: Case #:  1144028 No infringement. CRM:09515827
   Date:    January 30, 2012 2:39:07 PM CST
   To:    lucia@sssssss.com

Lucia,
 
It would appear our recently-received letter was sent in error as your December 20, 2011 e-mail (received) is still under review in our department. I apologize for any confusion our recent mailing may have caused.
Regards,
SAM BROWN
Copyright Compliance Specialist
Getty Images License Compliance
sam.brown@gettyimages.com
www.stockphotorights.com
Copyright 101
 

Getty Images Headquarters
605 5th Avenue South, Suite 400
Seattle, WA 98104 USA
Phone: 206.925.6714 (direct)
Toll Free: 1-800-462-4379 ext. 6714
Fax: 206.925.5001

©2012 Getty Images, Inc.
PRIVILEGED AND CONFIDENTIAL
This message may contain privileged or confidential information and is intended only for the individual named. If you are not the named addressee or an employee or agent responsible for delivering this message to the intended recipient you should not disseminate, distribute or copy this e-mail or any attachments hereto. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail and any attachments from your system without copying or disclosing the contents. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission. If verification is required please request a hard-copy version. Getty Images, 605 5th Ave South, Suite 400, USA, www.gettyimages.com. PLEASE NOTE that all incoming e-mails will be automatically scanned by us and by an external service provider to eliminate unsolicited promotional e-mails ("spam"). This could result in deletion of a legitimate e-mail before it is read by its intended recipient at our firm. Please tell us if you have concerns about this automatic filtering


 

21
Getty Images Letter Forum / RSS Demand: Perfect 10 & Google.
« on: January 12, 2012, 05:50:49 PM »
In his first post at ELI, mikedrag, an even newer newbie than I am, wrote:
Quote
I have a site where I legally (with permission) take RSS news feed from another site and displays it. Images from RSS feed are pooled from source site (never copied to my servers). I have recently received Getty Images letter demanding settlement for 3 images they found on my site.
RSS news feed is updated automatically. I don't have any way to check every single news article.(it would be worthless as time consuming)
Do I break any copyright rules doing it this way?
On the other hand at the time when Getty images sent the letter my site was shot down already for a month due to other technical issues.
Do they really have a case against me?

Two of us responded, citing http://caselaw.findlaw.com/us-9th-circuit/1327768.html
(For details see: http://www.extortionletterinfo.com/forum/getty-images-letter-forum/images-from-rss-getty-images-letter/)

It occurred to me that I might want to explain why, if Getty drags mikedrag to court, with only a little publicity Google might very well turn up with an amicus brief.

Let's look at who other than mikedrag "displays" images picked up from RSS feeds. See this:

If you examine the address bar, you can see that's a snapshot of the page as displayed by Google.  Does Google own the copyright to the image of that cute kitty? Nope.  Does Google host that entire full-sized image of the adorable sleeping cat on its server? Nope.

Google is doing precisely what mikedrag does.

They are running the feed from a blog that publishes an RSS feed.  The person who publishes that blog, and also the feed, could refuse to publish the feed. That person could publish a partial feed. They could block display of the images.  Do you know how I know for a fact that Google is hotlinking-- not hosting the images on its own server-- and that the person running that blog could block display of the images? Here's why:


Because I just blocked them. 

Now I need to see what havoc I wreaked. I happen to be blocking at the cloud; I could equally well block using .htaccess. But everyone can always block images. It just so happens I'd rather let people see these at Google, so I'm going to go undo that.  (Or try to undo it while still banning certain particular people from viewing images. :) )
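
For anyone who would rather block hotlinking with .htaccess than at the cloud, here is a minimal sketch (assuming Apache with mod_rewrite; 'example.com' is a placeholder for your own domain, and blank referers are allowed so direct visits and most feed readers still work):

<IfModule mod_rewrite.c>
RewriteEngine On
# Let through requests with no referer or a referer from my own site
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
# Everything else gets a 403 when it asks for an image
RewriteRule \.(jpe?g|gif|png)$ - [NC,F,L]
</IfModule>

You could instead serve a substitute image, but the idea is the same: the site owner, not Google, decides whether the images display.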

22
Legal Controversies Forum / Attorneys fees in copyright: Mattel Bratz
« on: December 28, 2011, 01:05:14 PM »
I think some who get Getty letters will be interested in the judge's reasoning for awarding the Bratz side attorney's fees after they successfully defended against Mattel's copyright infringement suit:
http://scholar.google.com/scholar_case?case=11025248954066929583

(I find this interesting because my Getty Images letter alleges infringement in a case where I never hosted their image.)



23
Getty Images Letter Forum / bzq-109-66-7-15.red.bezeqint.net Hammering site
« on: December 20, 2011, 04:33:49 PM »
As some of you are aware, bezeqint.net is rumored to be the network from which PicScout operates. Today, my site went down owing to high traffic resulting in excess memory usage. When I investigated my logs, I found the site had been hammered by Bezeqint.  The user agent is Java/1.6.0_25-- not PicScout.

We had a discussion previously about companies trying to gain access to sites by spoofing user agents.  Did the legal eagles ever figure out the legal issues involved? Because this is a lot of hits from Bezeqint.
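
While the legal question gets sorted out, anyone who wants to stop this particular traffic could try a user-agent block. A minimal .htaccess sketch (assuming Apache with mod_rewrite, and assuming you have no legitimate tools hitting your site that identify themselves only as a Java library):

<IfModule mod_rewrite.c>
RewriteEngine On
# Refuse anything whose user agent starts with "Java/"
RewriteCond %{HTTP_USER_AGENT} ^Java/ [NC]
RewriteRule .* - [F,L]
</IfModule>

Of course, if they spoof the user agent this does nothing, which is exactly the problem under discussion.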

24
Getty Images Letter Forum / TinEye.com
« on: December 19, 2011, 06:40:49 PM »
It could be worth excluding TinEye
http://www.tineye.com/

I denied "tineye.com" in .htaccess-- but I'm not surethat's useful.

The robots.txt entry for it is evidently:

User-agent: TinEye
Disallow: /

Some might want to watch for its IPs, as the crawler might not use the IP range associated with the company domain.
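
If the bot ignores robots.txt, a user-agent block may catch it. A minimal .htaccess sketch (assuming Apache with mod_rewrite, and assuming the crawler really does send "TinEye" in its user agent string):

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} TinEye [NC]
RewriteRule .* - [F,L]
</IfModule>

Watching the logs for its IPs is still worthwhile, since neither robots.txt nor the user-agent check helps if the crawler hides who it is.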

25
Getty Images Letter Forum / Bot trap for image browsing
« on: December 02, 2011, 02:03:17 PM »
Hi all,

I got an extortion letter early this week.  Oddly, I'd been working on banning bots all last month, but I hadn't been worried about images. My issue was CPU and memory, which bots were sucking like crazy.  Images are static, so they don't cause that problem. Needless to say, I now see the need to try to trap bots that are racing through images.   I've been reading various concerns and had some of my own. These include:

1) Not knowing the current IPs of things like PicScout for certain.
2) PicScout (or others in the future) changing IPs.
3) Masking of user agents

etc.

So, I want to come up with a way that a blogger, web site owner, or forum host can identify bots as they crawl and block them. No method is going to be 100% effective, but this morning I've been ginning up an idea based on the bot trap here:

http://danielwebb.us/software/bot-trap/

That bot trap will not work for catching some programmed image-crawling bots I've seen crawling my blog, because at least some of them aren't going to hit a PHP file on purpose.  They are programmed to just crawl through images, leaving PHP files alone. They also don't make mistakes. (I know how to catch bots that make mistakes on a WordPress blog and would know how to do it here at the forum. More on that later.)

My idea for catching what I might call "pure image-browsing bots" is to do this (a rough .htaccess sketch follows the list):

1) Add directory-specific .htaccess files to the directories I wish to protect from image bots. (These would be at least my image directories. I could put them higher up-- but I'd need to be sure I know how to avoid screwing up a complicated .htaccess file in that case.  Anyway, I really only want to block these guys from images.)
2) Add an image or multiple images that I *never* link on purpose at my site. These can be 1-pixel colored images or anything.  For now, call that image 'honeyPotImage.jpg'.
3) In a top-level .htaccess, send any bot trying to load 'honeyPotImage.jpg' (or the other honeypot images) to a bot trap written in PHP.  This bot trap is somewhat similar to the one above.
4) Add the IP of all bots sent to the trap to the appropriate .htaccess files.
5) After (4), the bots (or whoever gets trapped) will no longer be able to load images in the protected directories, even though they can still load text. Note: because they can load text, human visitors to my blog will be able to tell me that images vanished. This will let me unban them-- taking care to do it in a way that I think will still protect me from bots.
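
Here is the rough .htaccess sketch mentioned above for steps 2 through 4. It is only an illustration: 'honeyPotImage.jpg' and 'bot-trap.php' are placeholder names, and the PHP trap itself (which would record the visiting IP and append it to the per-directory deny list) is not shown.

# Top-level .htaccess: route any request for the honeypot image
# to the PHP bot trap
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule honeyPotImage\.jpg$ /bot-trap.php [L]
</IfModule>

# Per-directory .htaccess dropped into each protected image directory;
# the trap appends one "Deny from" line per trapped IP (Apache 2.2 syntax)
Order Allow,Deny
Allow from all
Deny from 72.26.211.129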

FWIW: I'll be adding some whitelisted hosts to the tool. My first draft has Google and Bingbot whitelisted.

I'm going to get this working for my blogs. I was wondering if others would be interested in using it once it's working. If yes, I might ask you questions to figure out how to make this user-friendly. Also, if people do use it, at some point we may want to share lists of user agents and IPs we are seeing racing through images.

This sharing could be automated and  would help us identify any changes in IP ranges or host addresses and help people at sites 2-N ban the creepy bots as soon as they are detected at site 1.

FWIW: Lots of people at web host forums are complaining about these bots for reasons other than concerns about getting a Getty letter. The bots just race through, suck bandwidth, clutter up server logs and are just a plain old nuisance. Because of the latter, if the system is made convenient, we might be able to get lots of people using it. But first I think I just need to  know if anyone would like to volunteer to try it in a week or two after I have it working. Actually, probably by Wed.

26
Getty Images Letter Forum / Re: ELI Website Traffic Statistics Trivia
« on: December 01, 2011, 10:31:45 AM »
Daniel--
ZBblock (http://www.spambotsecurity.com/zbblock.php) will help block robo-spammers from registering.  It's pretty easy to use, but I know some people are uncomfortable editing PHP or FTPing to their hosts, and I don't know you well enough to know where you sit on the continuum of dealing with software.  If you need help, ask me. (I know I'm new here, but I can still help with that.)

 I recently installed zbblock at my blog precisely because the load from crawlers has been exploding. Oddly, in Oct/Nov I spent time trying to figure out how to auto-detect and bounce lots of these guys and created a side-blog containing about 3 posts discussing a few things. I was planning to discuss that more.

I think we might also need to figure out some way to help people bounce any nuisance bots that drain our sites' bandwidth for their own purposes. This includes things like PicScout; web rumors suggest Getty uses it for crawling.
