Posts posted by kralcx

  1. Since you're new, you might as well start learning responsive design (a responsive site works on multiple viewing devices, e.g. smartphones and different-sized monitors). The Goldilocks Approach uses a combination of ems, max-width, media queries and pattern translations to consider just three states, which allows your designs to be resolution independent. There are many other grid systems to choose from; I just prefer the Goldilocks Approach.
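To make the idea concrete, here is a minimal sketch of the pattern: sizes in ems, a max-width on the content column, and media queries switching between three states. The breakpoints and selectors are my own illustrative values, not the Goldilocks Approach's actual numbers.

```css
/* Baseline state: small screens, single column */
body {
  font-size: 100%;        /* sizing in ems keeps everything relative */
}
#content {
  margin: 0 auto;
  max-width: 30em;        /* never wider than a comfortable line length */
}

/* Middle state: enough room for a sidebar */
@media only screen and (min-width: 48em) {
  #content { max-width: 48em; }
  #sidebar { float: right; width: 15em; }
}

/* Wide state: full multi-column layout */
@media only screen and (min-width: 75em) {
  #content { max-width: 66em; }
}
```

Because everything is specified in ems, the layout also adapts when the user changes their base font size, not just their window width.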

  2. So, say I created an e-commerce store for somebody & they had no external online links to their site (because it is brand new), googlebot would not be able to find them?

     

    I think you meant to say inbound links (links on other sites pointing toward your website). Theoretically, yes, that could happen, but in practice it rarely does. Many people will find a brand-new website for a variety of reasons: they may be hackers, they may run directory sites, or they may simply like your website and want to link to it.

     

    If you want to be sure the search engines don't index your website, password-protect it, or at the very least use a robots.txt file asking the search engines not to index it.
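For reference, a minimal robots.txt that asks all crawlers to stay out of the whole site looks like this (it goes in the site root). Note that it is only a request: well-behaved spiders honor it, but it is not real protection the way a password is.

```
User-agent: *
Disallow: /
```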

  3. Sorry, to understand better, by crawling the web --> does this mean look for index.html or index.php in all root directories of all registered domain names?

    Also, what about registered sub-domain names? Where do they fit into all of this?

     

    Yes. As long as there is a link to a page (for example, index.html or index.php), Googlebot (Google's crawler) can find it and index that page. Sub-domains are considered separate websites by Google, but they can still be crawled and indexed just like any other website.

     

    Hi Eddie,

    Thanks for the reply.

    Are you saying that crawlers, etc. cannot access these folders either? [Because I wasn't really sure about this.]

    Thanks

     

    Yes. According to Google's own website: "If you need to keep confidential content on your server, save it in a password-protected directory. Googlebot and other spiders won't be able to access the content." This is the simplest and most effective way to prevent Googlebot and other spiders from crawling and indexing content on your site.

     

    Hope that helps. :)

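To illustrate the password-protected-directory option: on an Apache server (an assumption; other servers differ) the usual recipe is HTTP Basic authentication via an .htaccess file. The paths and names below are illustrative only.

```apacheconf
# .htaccess placed inside the directory you want to protect
AuthType Basic
AuthName "Members only"
# Full filesystem path to the password file, kept outside the web root
AuthUserFile /home/example/.htpasswd
Require valid-user
```

You create the password file with Apache's htpasswd utility, e.g. `htpasswd -c /home/example/.htpasswd username`. Crawlers cannot authenticate, so anything behind this prompt stays out of the index.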
  4. Google will only find things by following links, just like we do. Google cannot simply look inside your folders and index things. I'm 99% sure that's true.

     

    Yes, that's correct. Googlebot finds pages in two ways: through the add-URL form at www.google.com/addurl.html, and by following links as it crawls the web.

  5. A cookie is just one or more pieces of information stored as text strings on your machine. A web server sends you a cookie and the browser stores it; the browser then returns the cookie to the server the next time that page is requested.

     

    The most common use of a cookie is to store a user ID.

     

    Many websites will not let you access them if you refuse their cookies. The choice to accept or not is always yours.

     

    If you are concerned about being tracked, just use a program like CCleaner to remove all cookies after you close your browser.
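To make the "text strings" part concrete, here is a small sketch (plain JavaScript; the function name and values are mine) that parses the kind of cookie string a browser sends back. In a browser the raw string is available as `document.cookie`; the function takes it as a parameter so it also runs outside a browser.

```javascript
// Parse a cookie string such as "userid=42; theme=dark"
// into a plain object of name/value pairs.
function parseCookies(cookieString) {
  const jar = {};
  for (const pair of cookieString.split(";")) {
    const trimmed = pair.trim();
    if (!trimmed) continue;                      // skip empty segments
    const eq = trimmed.indexOf("=");
    const name = trimmed.slice(0, eq);
    const value = decodeURIComponent(trimmed.slice(eq + 1));
    jar[name] = value;
  }
  return jar;
}

// Example: a server that stored a user ID and a preference
const jar = parseCookies("userid=42; theme=dark");
console.log(jar.userid); // "42"
```

This is exactly why the "store a user ID" use is so common: one short `name=value` pair is enough for the server to recognize you on the next request.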

  6. I am no expert in this field, but it seems to me that you need some server-side code (for example, PHP) to determine the user's IP address and decide whether to serve from the local streaming server or the CDN. I believe a simple piece of JavaScript can detect whether a user is on a smartphone and redirect them accordingly. A preferable option (in my opinion) is to make your website a responsive web design; that way you won't need a separate mobile site.

     

    Hopefully someone more qualified than I am can give you a more thorough answer. :)
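A rough sketch of the "simple JavaScript" idea: test the user-agent string for common mobile tokens. The token list and function name are my own, and real-world detection is messier than this, which is one reason responsive design is usually the better route.

```javascript
// Returns true if the user-agent string looks like a mobile device.
// The token list is illustrative, not exhaustive.
function looksLikeMobile(userAgent) {
  return /Android|iPhone|iPad|iPod|Windows Phone|BlackBerry/i.test(userAgent);
}

// In a browser you would call it with navigator.userAgent and redirect:
//   if (looksLikeMobile(navigator.userAgent)) {
//     window.location.href = "https://m.example.com/";  // hypothetical mobile URL
//   }
console.log(looksLikeMobile("Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X)")); // true
```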

  7.  
     border: 0;
     height: 2px;
     background-image: -webkit-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
     background-image: -moz-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
     background-image: -ms-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
     background-image: -o-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
     }

     

     

    Vendor-specific extensions are guaranteed not to cause conflicts (unless two vendors happen to choose the same identifier, of course). Be aware, though, that they are subject to change at the vendor's whim: they don't form part of the CSS specifications, even though they often mimic the proposed behavior of existing or forthcoming CSS properties.

     

    Although these extensions can be useful at times, it’s still recommended that you avoid using them unless it’s absolutely necessary. It’s also worth noting that, as is usually the case with proprietary code, the extensions will not pass CSS validation.

     

    Having said all of the above, you have some errors in your linear-gradient code. Vendor-specific declarations come before the standard one, you're missing the fallbacks for older versions of IE, and you have one rgba() color stop too many. I rewrote it below:

     

     background-image: -webkit-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75)); /* vendor-specific code for webkit browsers */
     background-image: -moz-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75)); /* mozilla */
     background-image: -o-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75)); /* opera */
     background-image: -ms-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75)); /* IE10 */
     filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#00000000', endColorstr='#BF000000', GradientType=1); /* IE 5.5-7; alpha 0.75 is hex BF, GradientType=1 makes it horizontal */
     -ms-filter: "progid:DXImageTransform.Microsoft.gradient(startColorstr='#00000000', endColorstr='#BF000000', GradientType=1)"; /* IE8 */
     background-image: linear-gradient(to right, rgba(0,0,0,0), rgba(0,0,0,0.75)); /* standard code */
