Getting code from a different domain into JavaScript...

DanielEKFA

Hey hey :)

Having a lot of problems with this. I'm helping my brother out with his art
school web page. He really wants random images from the web shown in the
background, and asked me to help him out.

My idea is this: Use the CNN Top Stories RSS feed to harvest keywords, then
use a random keyword from this harvest to search Google, get links from the
result, look through random links here and get links to images, then load
these images into the art web page's background.

My first thought was to use an XMLHttpRequest object, as I've played around
with it before and knew what it could do. Trouble is that Firefox needs some
kind of script signing to allow retrieving data from a different domain,
and I don't really understand how that works. Konqueror gives no warnings,
but fails to actually load the external content, perhaps because of the
same restriction. On top of this, IE seems to work, but pops up a warning
dialog telling the user that a script is trying to access external content,
which is unsafe. I wouldn't click OK if I saw that kind of dialog on a
different site, so I wouldn't expect anyone to do it on this site either.

Okay, so I looked for another way, and I read about iframes, thinking this
would be perfect, especially considering the .links and .images properties
that already exist on a document object (making a custom lexer/parser pretty
much superfluous). The iframes themselves work great in Konqueror, Firefox,
and IE, happily loading content. The problem is (as I've understood it) that
iframes aren't document objects but elements, and therefore have neither
a .links nor an .images property. The Firefox console gives "property not
defined" errors on both document.getElementById('myIframe').document and
document/window.frames['myIframe'].document, anyway :(

I can really do fine without either of these properties if I could simply
find a way to get the external HTML code imported and script accessible so
that I can parse it with a state machine and extract what I need.
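(Incidentally, for same-origin frames the framed document is reachable via the iframe element's contentDocument property in Mozilla, or via window.frames['myIframe'].document; for cross-domain frames, access is blocked by the same-origin policy either way, which is likely what you're hitting.) Once you do get the raw HTML into a string, via whatever transport ends up working, the extraction step can be quite small. Here is a minimal JavaScript sketch using a regex-based scanner rather than a full state machine; it assumes img tags with quoted src attributes, so real-world markup may need sturdier handling:

```javascript
// Minimal sketch: given raw HTML as a string, pull out the src
// attributes of <img> tags. This is a simple regex scanner, not a
// real HTML parser, so treat it as a starting point only.
function extractImageUrls(html) {
  var urls = [];
  // Match <img ... src="..."> or <img ... src='...'>
  var re = /<img\b[^>]*\bsrc\s*=\s*(?:"([^"]*)"|'([^']*)')/gi;
  var match;
  while ((match = re.exec(html)) !== null) {
    // Group 1 holds a double-quoted value, group 2 a single-quoted one.
    urls.push(match[1] || match[2]);
  }
  return urls;
}

// Example:
var sample = '<p><img src="a.jpg"><img alt="x" src=\'b.png\'></p>';
console.log(extractImageUrls(sample)); // -> [ 'a.jpg', 'b.png' ]
```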

I've thought about a PHP solution too, but haven't looked into it thoroughly
yet, as I understand that very few servers allow reading files from other
domains into a local PHP script. Or is this wrong?

I'm eager to receive any kind of tip, so long as it's cross-platform
compatible and not based on Java or anything else (Flash, etc.) that would
require the visiting browser to have an add-on installed. The site should
work on Safari, IE (Mac & PC), Konqueror, and Firefox. I only have the rest
of the day to get this working (approx. 10 hours) ;)

Thanks in advance,
Daniel :)
 

alex bazan

DanielEKFA wrote:
I've thought about a PHP solution too, but haven't looked into it thoroughly
yet, as I understand that very few servers allow reading files from other
domains into a local PHP script. Or is this wrong?

I think that would be the best solution, and it would avoid getting into
the security limitations of the browser.

Look into the filesystem functions
(http://www.php.net/manual/en/ref.filesystem.php)... functions like
file_get_contents() can take a URL as an argument, so you can open any
external URL and dump its contents into a variable for parsing.

You can also use the DOMDocument classes to parse the XML from feeds.
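To make that concrete, here is a minimal sketch of the approach, assuming allow_url_fopen is enabled on the server; the feed URL below is just a placeholder for whichever CNN feed you end up using:

```php
<?php
// Fetch a remote page into a string (requires allow_url_fopen = On).
$html = file_get_contents('http://www.cnn.com/');
if ($html === false) {
    die('Could not fetch the remote page');
}

// For the RSS feed, DOMDocument (the PHP 5 DOM extension) can parse
// the XML and pull out the titles. The URL here is only an example.
$doc = new DOMDocument();
$doc->load('http://rss.cnn.com/rss/cnn_topstories.rss');
foreach ($doc->getElementsByTagName('title') as $title) {
    echo $title->textContent . "\n";
}
?>
```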
 

DanielEKFA

alex said:
DanielEKFA wrote:

I think that would be the best solution, and it would avoid getting into
the security limitations of the browser.

That's where I'm going now, too :) I've gotten pretty sick of all that
security stuff, although I see why it's necessary ;)
Look into the filesystem functions
(http://www.php.net/manual/en/ref.filesystem.php)... functions like
file_get_contents() can take a URL as an argument, so you can open any
external URL and dump its contents into a variable for parsing.

You can also use the DOMDocument classes to parse the XML from feeds.

Thanks, I was about to browse the php.net docs to check out retrieval
methods; your tips will come in handy.

Cheers,
Daniel :)
 

DanielEKFA

Alan said:
Carved in mystic runes upon the very living rock, the last words of
DanielEKFA of comp.lang.javascript make plain:


You do not need to have allow_url_fopen set in your PHP installation in
order to retrieve remote content; you can simply perform HTTP operations.
There is a function in HoloLib that will allow you to do a GET.

ftp://ftp.holotech.net/hololib.zip

Very cool, thank you! It appears that library also sports a nifty HTML
parser, perfect! :D

Cheers,
Daniel :)
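For anyone reading along: the "simply perform HTTP operations" approach Alan describes can be done with PHP's socket functions even when allow_url_fopen is disabled, since no URL wrapper is involved. This is a generic sketch of a raw GET, not HoloLib's actual function (its API may differ):

```php
<?php
// Generic sketch of a raw HTTP GET over a socket. Works without
// allow_url_fopen because it never opens a URL stream directly.
function http_get($host, $path) {
    $fp = fsockopen($host, 80, $errno, $errstr, 10);
    if (!$fp) {
        return false;
    }
    fwrite($fp, "GET $path HTTP/1.0\r\n" .
                "Host: $host\r\n" .
                "Connection: close\r\n\r\n");
    $response = '';
    while (!feof($fp)) {
        $response .= fgets($fp, 4096);
    }
    fclose($fp);
    // Strip the headers; the body follows the first blank line.
    $parts = explode("\r\n\r\n", $response, 2);
    return isset($parts[1]) ? $parts[1] : '';
}

$html = http_get('www.example.com', '/');
?>
```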
 
