Stef's Coding Community

saversites

Member
  • Content Count: 65
Everything posted by saversites

  1. Hi, I have a form field where I prompt the user for their domain name. Now I need to filter the input so the user submits a valid domain. How do I do it? This isn't working: $primary_website_domain = filter_var(trim($_POST["primary_website_domain"],FILTER_SANITIZE_DOMAIN)); $primary_website_domain_confirmation = filter_var(trim($_POST["primary_website_domain_confirmation"],FILTER_SANITIZE_DOMAIN));
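     For reference, a minimal sketch of how this validation might be done on PHP 7+. Note that FILTER_SANITIZE_DOMAIN is not a real PHP filter constant (FILTER_VALIDATE_DOMAIN is), and in the quoted snippet the filter constant has ended up inside trim()'s argument list rather than filter_var()'s:

```php
<?php
// Sketch: validate a submitted domain name, assuming PHP 7+.
// The filter constant must be an argument of filter_var(), not trim().
function is_valid_domain(string $input): bool
{
    $domain = trim($input);
    // FILTER_FLAG_HOSTNAME restricts the string to hostname-like labels
    // (letters, digits, hyphens, separated by dots).
    return filter_var($domain, FILTER_VALIDATE_DOMAIN, FILTER_FLAG_HOSTNAME) !== false;
}

// Hypothetical usage with the form fields from the post:
// $primary = $_POST['primary_website_domain'] ?? '';
// if (!is_valid_domain($primary)) { /* show an error */ }
```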
  2. Thank you Stef. I have not tried on a live server. I guess I should do that first! Cheers!
  3. PHP experts, I need to fetch not the last row in the table but the last entry matching a condition. I only know how to fetch the last row in the table. I tried coding it the way I think it is done, but no luck; Googling didn't help either. $query_for_today_date_and_time = "SELECT date_and_time FROM logins WHERE username = ? ORDER BY id DESC LIMIT 1"; if($stmt_for_today_date_and_time = mysqli_prepare($conn,$query_for_today_date_and_time)) { mysqli_stmt_bind_param($stmt_for_today_date_and_time,'s',$db_username); mysqli_stmt_execute($stmt_for_to
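     A hedged sketch of where that truncated snippet seems headed: bind the username, execute, then bind and fetch the single result. Table and column names come from the post; the connection handle is assumed:

```php
<?php
// Sketch: fetch the most recent login time for one user (not the last
// row of the whole table). Assumes a mysqli connection and the
// logins(id, username, date_and_time) table from the post.
function last_login(mysqli $conn, string $username): ?string
{
    $sql = "SELECT date_and_time FROM logins
            WHERE username = ? ORDER BY id DESC LIMIT 1";
    $stmt = mysqli_prepare($conn, $sql);
    mysqli_stmt_bind_param($stmt, 's', $username);
    mysqli_stmt_execute($stmt);
    mysqli_stmt_bind_result($stmt, $when);
    $found = mysqli_stmt_fetch($stmt);   // false/null when no row matched
    mysqli_stmt_close($stmt);
    return $found ? $when : null;
}
```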
  4. The following fails to grab the user's real IP. I am testing on XAMPP (localhost). The code is supposed to grab the real IP even if the user is hiding behind a proxy. Why is it not showing my dynamic IP? I can see my IP on whatismyip.com, but this code fails to show it. I have been testing on localhost using Mini Proxy. function getUserIpAddr() { if(!empty($_SERVER['http_client_ip'])){ //IP from Shared Internet $ip = $_SERVER['HTTP_CLIENT_IP']; }elseif(!empty($_SERVER['HTTP_X_FORWARDED_FOR'])){ //IP from Proxy $ip = $_SERVER['HTTP_X_FORWARDED_FOR']; }els
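     Two things likely bite here: $_SERVER keys are always UPPERCASE, so the 'http_client_ip' check never matches, and on localhost REMOTE_ADDR is 127.0.0.1 (or ::1), never the public address whatismyip.com reports. A sketch of the usual header-probing version, with the caveat that proxy headers are client-supplied and spoofable:

```php
<?php
// Sketch of the common (and unreliable) header-probing approach.
// None of these headers can be trusted for security decisions.
function getUserIpAddr(): string
{
    if (!empty($_SERVER['HTTP_CLIENT_IP'])) {              // shared internet
        $ip = $_SERVER['HTTP_CLIENT_IP'];
    } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {  // behind a proxy
        // May be a comma-separated chain; the first entry is the client.
        $ip = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR'])[0];
    } else {
        $ip = $_SERVER['REMOTE_ADDR'] ?? 'UNKNOWN';        // 127.0.0.1 on localhost
    }
    return trim($ip);
}
```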
  5. Hi Stef,

     Can you kindly respond to my thread? It is an interesting topic.
  6. Folks, I need to auto-submit URLs one by one to my MySQL db via my "Link Submission" form. The Link Submission form will belong to my future search engine, which I am currently coding in PHP as a learning assignment. For simplicity's sake, let's forget my search engine project and assume I have a web form on an external website that I need filled with people's personal details. Say the external website's form looks like this: <form name = "login_form" method = "post" action="yourdomain.com/form.php" enctype = "multipart/form-data"> <fieldset&
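     As a sketch under the post's assumptions (the yourdomain.com/form.php endpoint and the field name are hypothetical), a form can be submitted programmatically with a cURL POST; only do this against sites you own or have permission to automate:

```php
<?php
// Sketch: POST an array of fields to a form endpoint with cURL.
function submit_form(string $endpoint, array $fields)
{
    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);   // false on failure
    curl_close($ch);
    return $response;
}

// Hypothetical usage, submitting URLs one by one:
// foreach ($urls as $url) {
//     submit_form('https://yourdomain.com/form.php', ['link' => $url]);
// }
```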
  7. PHP experts, for some reason I can't get the "if(file_exists" check to work. I don't want the user uploading the same file again. If he tries, he should get the error alert: "Error: You have already uploaded a video file to verify your ID!" In the comments I have written questions in CAPITALS, such as: IS THIS LINE CORRECT? IS THIS LINE OK? Those are the particular lines where I need your attention the most, to tell me whether I made any mistakes on them and if so then
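     A minimal sketch of the duplicate-upload guard the post describes; the directory layout and per-user filename scheme here are invented purely for illustration:

```php
<?php
// Sketch: refuse a second ID-verification upload from the same user.
// The '<username>_id_video.mp4' naming is a hypothetical example.
function already_uploaded(string $upload_dir, string $username): bool
{
    $target = rtrim($upload_dir, '/') . '/' . $username . '_id_video.mp4';
    return file_exists($target);
}

// if (already_uploaded('uploads', $username)) {
//     echo 'Error: You have already uploaded a video file to verify your ID!';
// }
```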
  8. I modified the script but no luck! One year has passed! Hi, below is a membership (account registration) page script. I need the user to type the password twice, the final one as confirmation. I tried both of the following "Not Equal To" operators after entering mismatched values into the password field and the password confirmation (re-type password) field, and neither works: I do not get the alert that the passwords don't match. != !== My troubled lines: 1st attempt, no luck: if ($password != $password_confirmation) { ec
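     For plain strings, both != and !== do report a mismatch, so if the alert never appears the branch itself is probably never reached (wrong $_POST field names, or the form not actually POSTing). A minimal sketch of the strict check:

```php
<?php
// Sketch: strict password/confirmation comparison.
function passwords_match(string $password, string $confirmation): bool
{
    return $password === $confirmation;
}

// Hypothetical usage with the post's field names:
// if (!passwords_match($_POST['password'] ?? '', $_POST['password_confirmation'] ?? '')) {
//     echo "Passwords don't match!";
// }
```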
  9. I have error reporting turned on in one of the included files. Not getting any error. <?php //ERROR REPORTING CODES. declare(strict_types=1); ini_set('display_errors', '1'); ini_set('display_startup_errors', '1'); error_reporting(E_ALL); mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); ?> Ok, I will shorten the code in this example below. The MySQL queries are not working. if ($_GET["Result_SearchType"] == "Domain") { //Grabbing these: $_GET["Result_Domain"], $_GET["Result_PageType"]. $first_param = $_GET["Result_PageType"]; $second_
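     Since user-supplied $_GET values are going into queries here, a prepared statement keeps them out of the SQL string entirely. A hedged sketch with placeholder table and column names, since the full schema is not shown in the post:

```php
<?php
// Sketch: run the Domain-search branch with a prepared statement.
// 'links', 'page_type' and 'domain' are placeholder names.
function find_links(mysqli $conn, string $page_type, string $domain): array
{
    $sql = "SELECT * FROM links WHERE page_type = ? AND domain = ?";
    $stmt = mysqli_prepare($conn, $sql);
    mysqli_stmt_bind_param($stmt, 'ss', $page_type, $domain);
    mysqli_stmt_execute($stmt);
    // mysqli_stmt_get_result() needs the mysqlnd driver (the XAMPP default)
    $rows = mysqli_fetch_all(mysqli_stmt_get_result($stmt), MYSQLI_ASSOC);
    mysqli_stmt_close($stmt);
    return $rows;
}
```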
  10. Folks, why is my $query_1 failing to pull data from the MySQL db? I created a condition to show an alert if no result is found, but I do not get the alert. That means a result is found. But if it is found, why does the following code fail to display or echo the result through HTML? I am trying to pull the data with these URLs: http://localhost/test/links_stats.php?Result_SearchType=Domain&Result_PageType=Information20%Page&Result_Domain=gmail.com&Result_LinksPerPage=25&Result_PageNumber= http://localhost/test/links_stats.php?Result_SearchType=Page&Result_PageType=Information
  11. PHP buddies, look at these two updates. They both succeed in fetching the PHP manual page but fail to fetch the Yahoo homepage. Why is that? The 2nd script is like the 1st except for a small change. Look at the commented-out parts in script 2 to see the difference; the added code comes after the commented-out part. SCRIPT 1 <?php //HALF WORKING include('simple_html_dom.php'); $url = 'http://php.net/manual-lookup.php?pattern=str_get_html&scope=quickref'; // WORKS ON URL //$url = 'https://yahoo.com'; // FAILS ON URL $curl = curl_init($url); curl_setopt($curl, CUR
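     One common reason a fetch works on http://php.net but fails on https://yahoo.com is TLS: with no usable CA bundle the handshake fails and curl_exec() returns false, which the parser then chokes on. The exact cause on any given machine is an assumption; a sketch that at least surfaces the real error instead of passing false along:

```php
<?php
// Sketch: fetch a page and report cURL's own error on failure.
function fetch_page(string $url)
{
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
    // For HTTPS targets, prefer pointing CURLOPT_CAINFO at a cacert.pem
    // over switching CURLOPT_SSL_VERIFYPEER off.
    $html = curl_exec($curl);
    if ($html === false) {
        echo 'cURL error: ' . curl_error($curl);  // e.g. a certificate problem
    }
    curl_close($curl);
    return $html;
}
```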
  12. I did a search in the PHP manual for str_get_html to be sure what the function does, but I am shown no results. And so I ask: just what does it do?
  13. I am told: "file_get_html is a special function from the simple_html_dom library. If you open the source code for simple_html_dom you will see that file_get_html() does a lot of things that your cURL replacement does not. That's why you get your error." Anyway, folks, I really don't want to keep using this limited-capacity file_get_html(), so let's replace it with cURL. I tried my best at giving cURL a shot here. What about you? Care to show how to fix this thing?
  14. UPDATE: I have been given this sample code just now ... Possible solution with str_get_html: $url = 'https://www.yahoo.com'; $curl = curl_init($url); curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0); curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0); $response_string = curl_exec($curl); $html = str_get_html($response_string); //to fetch all hyperlinks from a webpage $links = array(); foreach($html->find('a') as $a) { $links[] = $a->href; } print_r($links); echo "<br />"
  15. I just replaced: //$html = file_get_html('http://nimishprabhu.com'); with: $url = 'https://www.yahoo.com'; $curl = curl_init($url); curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0); curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0); $html = curl_exec($curl); That is all! That should not result in that error! :eek:
  16. PHP buddies, what I am trying to do is learn to build a simple web crawler. At first I will feed it a URL to start with. It will then fetch that page and extract all the links into a single array. Then it will fetch each of those linked pages and likewise extract all their links into a single array. It will do this until it reaches its maximum link depth. Here is how I coded it: <?php include('simple_html_dom.php'); $current_link_crawling_level = 0; $link_crawling_level_max = 2; if($current_link_crawling_level == $link_crawling_level_max) { exit(); } else
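     The loop described can be sketched as a depth-limited recursion; this version swaps simple_html_dom for the built-in DOMDocument (my substitution, not the post's), and the seed URL is a placeholder:

```php
<?php
// Sketch: depth-limited crawl with DOMDocument.
// $seen doubles as a visited-set so pages are not fetched twice.
function crawl(string $url, int $depth, int $max_depth, array &$seen): void
{
    if ($depth >= $max_depth || isset($seen[$url])) {
        return;
    }
    $seen[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        return;                       // unreachable page: skip it
    }

    $dom = new DOMDocument();
    @$dom->loadHTML($html);           // @ silences warnings on messy HTML

    foreach ($dom->getElementsByTagName('a') as $a) {
        $href = $a->getAttribute('href');
        if (strpos($href, 'http') === 0) {   // absolute links only
            crawl($href, $depth + 1, $max_depth, $seen);
        }
    }
}

// $seen = [];
// crawl('https://example.com', 0, 2, $seen);   // placeholder seed URL
```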
  17. I would appreciate anyone's replies to my previous two posts, since they contain different code and I want to know why neither is working.
  18. Ok. I tried another ... <?php /* Using PHP's DOM functions to fetch hyperlinks and their anchor text */ $dom = new DOMDocument; $dom->loadHTML(file_get_contents('https://stackoverflow.com/questions/50381348/extract-urls-anchor-texts-from-links-on-a-webpage-fetched-by-php-or-curl/')); // echo Links and their anchor text echo '<pre>'; echo "Link\tAnchor\n"; foreach($dom->getElementsByTagName('a') as $link) { $href = $link->getAttribute('href'); $anchor = $link->nodeValue; echo $href,"\t",$anchor,"\n"; } echo '</pre>'; ?> I get error:
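     For what it's worth, loadHTML() emits warnings on almost any real-world page; buffering them through libxml is the usual guard. A self-contained sketch of the same link/anchor dump against an inline HTML string, so the parsing step is isolated from the network fetch:

```php
<?php
// Sketch: extract hrefs and anchor text, with libxml buffering the
// parse warnings that loadHTML() would otherwise print.
$html = '<p><a href="https://example.com">Example</a></p>'; // inline sample

libxml_use_internal_errors(true);   // collect warnings instead of printing
$dom = new DOMDocument();
$dom->loadHTML($html);
libxml_clear_errors();

$links = [];
foreach ($dom->getElementsByTagName('a') as $a) {
    $links[$a->getAttribute('href')] = trim($a->nodeValue);
}
print_r($links);
```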
  19. As you know, the code in my original post was for scraping all links found on the Google homepage. Thanks for the hint; I have worked on it but am facing a little problem. The 1st foreach belongs to the original script that scrapes the links from the Google homepage. I have now added two more foreach loops to scrape the outerhtml and innertext from each link, in the hope that one of the two would scrape the links' anchor texts. But I get a blank page now. Here is the code ... <?php # Use the Curl extension to query Google and get back a page of results $url = "http://forums.devshed.com/"; $ch = curl_i
  20. PHP gurus, do you know of any good PHP web crawler freeware/GPL, open source, etc.? Might as well check out the source code and learn from there (cURL, DOM, etc.). Sphider uses deprecated stuff and so is no good.
  21. PHP buds, here's the code, using DOM, for grabbing links from Google: <?php # Use the Curl extension to query Google and get back a page of results $url = "http://www.google.com"; $ch = curl_init(); $timeout = 5; curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); $html = curl_exec($ch); curl_close($ch); # Create a DOM parser object $dom = new DOMDocument(); # Parse the HTML from Google. # The @ before the method call suppresses any warnings th
  22. PHP gurus, my following code shows all the results of the "notices" tbl. That tbl has these columns: id recipient_username sender_username message This code works, but it does not use a PREP STMT. I need your help converting it so it uses a PREP STMT. In this code, all the records are spread over 10 pages. In the PREP STMT version, I need 10 records on each page. So if there are 3,000 records, they would be spread over 300 pages. NON-PREP STMT CODE $stmt = mysqli_prepare($conn, 'SELECT id, recipient_username, sender_username, message FROM notic
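     A hedged sketch of the prepared-statement pagination asked for: 10 rows per page via LIMIT ? OFFSET ?, with the offset computed from the page number. Table and column names are the ones given in the post:

```php
<?php
// Sketch: one page of notices, 10 per page, as a prepared statement.
function fetch_notices_page(mysqli $conn, int $page, int $per_page = 10): array
{
    $offset = ($page - 1) * $per_page;
    $sql = "SELECT id, recipient_username, sender_username, message
            FROM notices ORDER BY id LIMIT ? OFFSET ?";
    $stmt = mysqli_prepare($conn, $sql);
    mysqli_stmt_bind_param($stmt, 'ii', $per_page, $offset);
    mysqli_stmt_execute($stmt);
    $rows = mysqli_fetch_all(mysqli_stmt_get_result($stmt), MYSQLI_ASSOC);
    mysqli_stmt_close($stmt);
    return $rows;
}

// 3,000 records at 10 per page works out to ceil(3000 / 10) = 300 pages.
```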
  23. I now get an error that the variable $url is undefined on line 72. Notice: Undefined variable: url in C:\xampp\htdocs\...... If you check line 72, it says, between double quotes: $url = "http://devshed.com"; Even if I change the URL to one whose page cURL is able to fetch, I still get the same error. This does not work either, with single quotes: $url = 'http://devshed.com'; This is very strange! If the $url variable has not been defined, then how is cURL able to fetch the page whose URL is the $url variable's value? Even though the page gets fetched, I sti