
Temporary Domain


bongy0


Hi

 

Recently someone built me a WordPress site, and before it was complete he emailed me a link that wasn't the proper domain name, just so I could check the site was okay while he was still working on it. It was a kind of dummy URL that nobody was likely to stumble across, which was the idea, since the site wasn't finished yet.

 

What I would like to know is: when I am building a site, in WordPress or even Dreamweaver, how do I use a temporary URL in the same way, so that while I am still building it I can send my client to the temporary URL for feedback before I transfer the site to the correct domain? And how is that transfer done?

 

I may not be using the correct terminology here, but hopefully you get my point and can help.

 

Thanks

 

Dene


I basically do what Eric does. I just have a subdomain called "testing", just a different name. Then, to add to what Eric said, you tell your client to go there to see their site until it goes live, something like http://www.whateveryourdomainis/nameoffolderorsubdomain.


Either way will work. Just keep in mind that you don't want the search engines to index the temp URL. My suggestion is to use a robots.txt file to disallow search engines from indexing the pages.

 

Hi Eddie,

Do you know in what instances the temp URL might be indexed?

For example, if it were called index.html inside a folder, could it potentially be indexed?

Would a password-protected folder be more effective at preventing indexing by search engines?

How does the robots.txt solution work?

 

Thnx


If you set up a temp folder or temp URL, the odds are it will not be indexed as long as you don't post any related URLs at places like this forum. Many developers here will post a URL with spaces in it, indicating that you need to copy and paste the URL and remove the spaces in order to view the page.

 

Again, robots.txt will work well in preventing pages from being listed, but it's not 100% foolproof, since some lesser-known search engines will ignore it and those pages may then end up surfacing on the major search engines anyway. This only happened to me once, but that was probably because I had the temp site up for such a long time (over a year) before taking it down.

 

The password method will probably be the best bet to prevent pages from being indexed by search engines. Keep in mind that you will need to provide the login information to whomever you want to view the temp site.
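For anyone wondering how that is usually done: on an Apache server (which most shared hosts run), you can password protect the temp folder with an .htaccess file placed inside it. The following is just a minimal sketch; the realm name, the .htpasswd path, and the username are placeholders you would swap for your own.

# .htaccess inside the temp folder: require a login for everything in it
AuthType Basic
AuthName "Client preview"
AuthUserFile /home/youraccount/.htpasswd
Require valid-user

# create the password file once from a shell (it will prompt for the password):
# htpasswd -c /home/youraccount/.htpasswd clientname

Your client then types that username and password into the browser prompt before the temp site loads.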


I forgot to add to my recent post the robots.txt code you can use.

 

# go away
User-agent: *
Disallow: /temp_folder/

 

You add this to a robots.txt file in the root of your site. Change temp_folder to whatever your folder name is that you are trying to block.

 

Google robots.txt and you will find more information on this.
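One more aside for the subdomain approach mentioned earlier: robots.txt only applies to the host it sits on, so a temp site on its own "testing" subdomain can use a one-rule file at that subdomain's root to ask crawlers to skip everything. A minimal sketch (the subdomain name here is just an example):

# robots.txt served from the staging subdomain, e.g. http://testing.yourdomain.com/robots.txt
User-agent: *
Disallow: /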


Google will only find things by following links, just like us. Google cannot just look in your folders and index things. I'm 99% sure that's true.

 

Yes that's correct. Googlebot finds pages in two ways: through an add URL form, www.google.com/addurl.html, and through finding links by crawling the web.


The password method will probably be the best bet to prevent pages from being indexed by search engines. Keep in mind that you will need to provide the login information to whomever you want to view the temp site.

 

Hi Eddie,

Thnx for the reply.

Are you saying that crawlers etc. cannot access these folders either? (I wasn't really sure about this.)

Thnx


Thanks, the robots.txt approach seems like a prudent step.


Sorry, just to understand better: by "crawling the web", do you mean looking for index.html or index.php in the root directories of all registered domain names?

Also, what about registered subdomain names? Where do they fit into all of this?

 

Yes. As long as there is a link to that page (for example, index.html or index.php), Googlebot (Google's crawler) can find it and index that page. Subdomains are considered separate websites by Google; however, they can still be crawled and indexed just like any other website.

 

Hi Eddie,

Thnx for the reply.

Are you saying that crawlers etc. cannot access these folders either? (I wasn't really sure about this.)

Thnx

 

Yes. According to Google's own website: If you need to keep confidential content on your server, save it in a password-protected directory. Googlebot and other spiders won't be able to access the content. This is the simplest and most effective way to prevent Googlebot and other spiders from crawling and indexing content on your site.

 

Hope that helps. :)


Yes. As long as there is a link to that page (for example, index.html or index.php), Googlebot (Google's crawler) can find it and index that page.

So, say I created an e-commerce store for somebody and they had no external online links to their site (because it is brand new), Googlebot would not be able to find it?

 

Yes. According to Google's own website: If you need to keep confidential content on your server, save it in a password-protected directory. Googlebot and other spiders won't be able to access the content. This is the simplest and most effective way to prevent Googlebot and other spiders from crawling and indexing content on your site.

Hope that helps. :)

 

Great answer, cheers! :clap:


So, say I created an e-commerce store for somebody and they had no external online links to their site (because it is brand new), Googlebot would not be able to find it?

 

I think you meant to say inbound links (links pointing toward your website). Theoretically, yes, that could be true; in practice, however, you can't rely on it. Many people will find brand-new websites for a variety of reasons (they may be hackers, they may run directory sites, or they may simply like your website and want to link to it).

 

If you want to be sure the search engines don't index your website, password protect it, or at the very least use a robots.txt file asking the search engines not to index it.


I think you meant to say inbound links (links pointing toward your website). Theoretically, yes, that could be true; in practice, however, you can't rely on it. Many people will find brand-new websites for a variety of reasons (they may be hackers, they may run directory sites, or they may simply like your website and want to link to it).

 

If you want to be sure the search engines don't index your website, password protect it, or at the very least use a robots.txt file asking the search engines not to index it.

 

Okay, thanks for all your explanations.

Cheers,

