
kralcx

Member
  • Posts: 153
  • Days Won: 1

Posts posted by kralcx

  1. I am playing with the widgets and such that are available in my WordPress site. I put the "Cloud" widget in the sidebar of the blog page. I have no idea what its function is and was going to just get rid of it, but I'm sure there is a reason for it. Is there something else I can do with this? My link

     

    Thanks all,

    Annette

     

     It just shows the most commonly used tags on your blog in one place. Not very useful, if you ask me. You can most certainly get rid of it if you want to.

  2. #wrapper {
     width: 900px;
     margin: 0 auto;
     background: #FFF;
     }

     #header {
     height: 100%;
     }

     #navigation ul {
     height: 100%;
     }

     #navigation ul li {
     float: left;
     list-style-type: none;
     }

     #navigation ul li a {
     position: relative;
     left: 145px;
     display: block;
     padding: 8px 20px;
     text-decoration: none;
     font-family: Arial, Helvetica, sans-serif; /* "aerial" is not a valid font name */
     font-weight: bold;
     font-size: 14px;
     color: grey;
     border: 1px solid #BBBBBB;
     background: white;
     }

     

     The above code is what I changed in your CSS. It removes the space between your header and nav bar, and it centers your nav bar. I hope that helps.

  3. About HBase

     

     HBase is a column-oriented database management system that runs on top of HDFS. It is well suited to sparse data sets, which are common in many big data use cases. Unlike relational database systems, HBase does not support a structured query language like SQL; in fact, HBase isn’t a relational data store at all. HBase applications are written in Java, much like a typical MapReduce application, and HBase also supports client access through Avro, REST, and Thrift interfaces.

     

    An HBase system comprises a set of tables. Each table contains rows and columns, much like a traditional database. Each table must have an element defined as a Primary Key, and all access attempts to HBase tables must use this Primary Key. An HBase column represents an attribute of an object; for example, if the table is storing diagnostic logs from servers in your environment, where each row might be a log record, a typical column in such a table would be the timestamp of when the log record was written, or perhaps the server name where the record originated. In fact, HBase allows for many attributes to be grouped together into what are known as column families, such that the elements of a column family are all stored together. This is different from a row-oriented relational database, where all the columns of a given row are stored together. With HBase you must predefine the table schema and specify the column families. However, it’s very flexible in that new columns can be added to families at any time, making the schema flexible and therefore able to adapt to changing application requirements.
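     To make the column-family idea concrete, here is a minimal sketch using the HBase 1.x Java client. The table name (server_logs), the column families (meta and payload), and the row key format are my own inventions for illustration, not anything prescribed above:

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.HBaseConfiguration;
     import org.apache.hadoop.hbase.HColumnDescriptor;
     import org.apache.hadoop.hbase.HTableDescriptor;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.client.Admin;
     import org.apache.hadoop.hbase.client.Connection;
     import org.apache.hadoop.hbase.client.ConnectionFactory;
     import org.apache.hadoop.hbase.client.Put;
     import org.apache.hadoop.hbase.client.Table;
     import org.apache.hadoop.hbase.util.Bytes;

     public class LogTableSketch {
         public static void main(String[] args) throws Exception {
             Configuration conf = HBaseConfiguration.create();
             try (Connection conn = ConnectionFactory.createConnection(conf);
                  Admin admin = conn.getAdmin()) {
                 TableName logs = TableName.valueOf("server_logs");

                 // Predefine the schema: only the column families are fixed up front.
                 HTableDescriptor desc = new HTableDescriptor(logs);
                 desc.addFamily(new HColumnDescriptor("meta"));    // server name, timestamp, ...
                 desc.addFamily(new HColumnDescriptor("payload")); // the log line itself
                 admin.createTable(desc);

                 // Individual columns inside a family can be added freely at write time.
                 try (Table table = conn.getTable(logs)) {
                     Put put = new Put(Bytes.toBytes("web01|2013-06-01T12:00:00")); // row key
                     put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("server"),
                             Bytes.toBytes("web01"));
                     put.addColumn(Bytes.toBytes("payload"), Bytes.toBytes("line"),
                             Bytes.toBytes("GET /index.html 200"));
                     table.put(put);
                 }
             }
         }
     }

     Note how the row key plays the role of the Primary Key described above: every read and write is addressed by it, while a new qualifier inside a family (meta:server here) never requires a schema change.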

     

    Just as HDFS has a NameNode and slave nodes, and MapReduce has JobTracker and TaskTracker slaves, HBase is built on similar concepts. In HBase a master node manages the cluster and region servers store portions of the tables and perform the work on the data. In the same way HDFS has some enterprise concerns due to the availability of the NameNode (among other areas that can be “hardened” for true enterprise deployments by InfoSphere BigInsights), HBase is also sensitive to the loss of its master node.

     

     

     

    About Hadoop Distributed File System (HDFS)

     

    To understand how it’s possible to scale a Hadoop® cluster to hundreds (and even thousands) of nodes, you have to start with the Hadoop Distributed File System (HDFS). Data in a Hadoop cluster is broken down into smaller pieces (called blocks) and distributed throughout the cluster. In this way, the map and reduce functions can be executed on smaller subsets of your larger data sets, and this provides the scalability that is needed for big data processing.
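     If you want to see those blocks for yourself, here is a small sketch using the Hadoop Java client (the file path is hypothetical); it asks the NameNode where each block of a file lives:

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.fs.BlockLocation;
     import org.apache.hadoop.fs.FileStatus;
     import org.apache.hadoop.fs.FileSystem;
     import org.apache.hadoop.fs.Path;

     public class ShowBlocks {
         public static void main(String[] args) throws Exception {
             // Uses the cluster settings from core-site.xml / hdfs-site.xml.
             Configuration conf = new Configuration();
             FileSystem fs = FileSystem.get(conf);

             Path file = new Path("/data/large-input.txt"); // made-up path
             FileStatus status = fs.getFileStatus(file);

             // One BlockLocation per block: its offset, its length, and the
             // hosts holding a replica -- the "smaller pieces" described above.
             for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
                 System.out.printf("offset %d, length %d, hosts %s%n",
                         block.getOffset(), block.getLength(),
                         String.join(",", block.getHosts()));
             }
         }
     }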

     

    What’s the goal?

     

     The goal of Hadoop is to use commonly available servers in a very large cluster, where each server has a set of inexpensive internal disk drives. For higher performance, MapReduce tries to assign workloads to these servers where the data to be processed is stored. This is known as data locality. (It’s because of this principle that using a storage area network (SAN), or network attached storage (NAS), in a Hadoop environment is not recommended. For Hadoop deployments using a SAN or NAS, the extra network communication overhead can cause performance bottlenecks, especially for larger clusters.) Now take a moment and think of a 1000-machine cluster, where each machine has three internal disk drives; then consider the failure rate of a cluster composed of 3000 inexpensive drives + 1000 inexpensive servers!

     

     We’re likely already on the same page here: the component mean time to failure (MTTF) you’re going to experience in a Hadoop cluster is likely analogous to a zipper on your kid’s jacket: it’s going to fail (and poetically enough, zippers seem to fail only when you really need them). The cool thing about Hadoop is that the reality of the MTTF rates associated with inexpensive hardware is actually well understood (a design point, if you will), and part of the strength of Hadoop is that it has built-in fault tolerance and fault compensation capabilities. The same is true of HDFS: data is divided into blocks, and copies of these blocks are stored on other servers in the Hadoop cluster. That is, an individual file is actually stored as smaller blocks that are replicated across multiple servers in the entire cluster.

    An example of HDFS

     

     Think of a file that contains the phone numbers for everyone in the United States; the people with a last name starting with A might be stored on server 1, B on server 2, and so on. In a Hadoop world, pieces of this phonebook would be stored across the cluster, and to reconstruct the entire phonebook, your program would need the blocks from every server in the cluster. To achieve availability as components fail, HDFS replicates these smaller pieces onto two additional servers by default. (This redundancy can be increased or decreased on a per-file basis or for a whole environment; for example, a development Hadoop cluster typically doesn’t need any data redundancy.) This redundancy offers multiple benefits, the most obvious being higher availability.
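     That per-file knob is a one-liner against the Hadoop Java API. A minimal sketch, with a hypothetical path:

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.fs.FileSystem;
     import org.apache.hadoop.fs.Path;

     public class TuneReplication {
         public static void main(String[] args) throws Exception {
             Configuration conf = new Configuration();
             FileSystem fs = FileSystem.get(conf);

             Path phonebook = new Path("/data/phonebook.txt"); // made-up path
             // The default is 3 replicas in total; on a development cluster
             // you might drop an individual file down to a single copy:
             fs.setReplication(phonebook, (short) 1);
         }
     }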

     

     In addition, this redundancy allows the Hadoop cluster to break work up into smaller chunks and run those jobs on all the servers in the cluster for better scalability. Finally, you get the benefit of data locality, which is critical when working with large data sets. We detail these important benefits later in this chapter.

  4. background: #6a6a6a url(images/nav-bar-bg.png) repeat-x;

    background: -webkit-gradient(linear, left top, left bottom, from(#b9b9b9), to(#6a6a6a));

    background: -moz-linear-gradient(top, #b9b9b9, #6a6a6a);

    background: linear-gradient(-90deg, #b9b9b9, #6a6a6a);

     

     Please let me know: what are gradient and linear-gradient? Why do they take different values here, like

     2nd line: webkit-gradient(linear, left top, left bottom,

     3rd line: -moz-linear-gradient(top, and

     4th line: background: linear-gradient(-90deg,

     

     1st line: is a fallback, in case the browser doesn't recognize gradients at all.

     2nd line: is vendor-specific code for older WebKit browsers.

     3rd line: is vendor-specific code for Mozilla browsers (Firefox).

     4th line: is the standard syntax for a linear gradient. (One caution: the final spec measures angles clockwise from the top, so a top-to-bottom gradient is written linear-gradient(180deg, #b9b9b9, #6a6a6a) or linear-gradient(to bottom, #b9b9b9, #6a6a6a); the -90deg here follows the older prefixed angle convention.)

     

     I've included below additional code that should have been included in this CSS:

     background: -webkit-linear-gradient(top, #b9b9b9, #6a6a6a); this would go between lines 2 and 3; it is vendor-specific code for modern WebKit browsers

     background: -o-linear-gradient(top, #b9b9b9, #6a6a6a); this would go between lines 3 and 4; it is vendor-specific code for Opera browsers

     background: -ms-linear-gradient(top, #b9b9b9, #6a6a6a); this would follow the line above; it is vendor-specific code for Internet Explorer 10

     filter: progid:DXImageTransform.Microsoft.gradient(GradientType=0, startColorstr='#b9b9b9', endColorstr='#6a6a6a'); this would come after line 4; it is vendor-specific code for Internet Explorer 7-9

     

    Hope this helps you.
