Scraping Sites for SENukeXCR with Scrapebox

This topic contains 1 reply, has 2 voices, and was last updated by Matthew Woodward 1 year, 1 month ago.

Viewing 2 posts - 1 through 2 (of 2 total)
  • #21715

    DoubleD
    Participant

    Hey Matt, I am not sure if I am doing this correctly. Your reply would be much appreciated and would help me and other users a lot.

    I have opened up Scrapebox, imported my list of keywords, merged them with the Article-Beach footprints:

    “index.php?page=submitarticle”
    “Articles with any spelling or grammar errors will be deleted”
    “upload your articles and keep updated about new articles.”

    (with quotes, as they appear in the footprints.txt file you provided)
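The merge step above can be sketched in Python. This is a hedged stand-in for what Scrapebox's merge does internally, not its actual code: every footprint is paired with every keyword to form one search query per combination, with the exact-match quotes kept intact. The keyword values here are hypothetical placeholders.

```python
# Two of the Article Beach footprints quoted in the thread.
footprints = [
    '"index.php?page=submitarticle"',
    '"Articles with any spelling or grammar errors will be deleted"',
]

# Hypothetical keywords standing in for the imported keyword list.
keywords = ["dog training", "guitar lessons"]

# One query per (footprint, keyword) pair, quotes preserved,
# which is roughly what the merged Scrapebox query list looks like.
queries = [f"{fp} {kw}" for fp in footprints for kw in keywords]
```

With 2 footprints and 2 keywords this yields 4 queries; a real keyword list multiplies quickly, which is why an overnight scrape can return tens of thousands of URLs.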

    I then also merged the merge-list.txt with the above and let it scrape overnight. It had returned about 50,000 URLs by the time I stopped harvesting in the morning. I took a look at some of the sites and I'm unsure how SENuke would be able to post to or register with them, as it doesn't appear possible. I have not yet imported this list of 50,000 URLs into the Article Beach sites of SENukeXCR, because I feel like I went wrong somewhere in the process.
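A first cleanup pass on a harvest like this is usually to trim each URL to its root and remove duplicate domains, since the same site often appears many times under different pages. A minimal Python sketch of that idea (Scrapebox has built-in "trim to root" and "remove duplicate domains" options that do this inside the tool; the URLs here are made up for illustration):

```python
from urllib.parse import urlparse

# Hypothetical sample of harvested URLs.
harvested = [
    "http://example.com/index.php?page=submitarticle",
    "http://example.com/articles/123.html",
    "http://another-site.org/index.php?page=submitarticle",
]

# Collapse each URL to scheme://host/ and keep each domain once,
# preserving first-seen order.
seen, roots = set(), []
for url in harvested:
    p = urlparse(url)
    if p.netloc not in seen:
        seen.add(p.netloc)
        roots.append(f"{p.scheme}://{p.netloc}/")
```

Deduplicating by domain alone often shrinks a 50,000-URL harvest dramatically before any posting is attempted.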

    Is this correct? Would I then do the same process with the other footprints you've provided in the footprints.txt file?

    Your advice or guidance is appreciated, and thanks for all the help.

    -Dylan

    #21718

    Hi,

    You will get a lot of junk results returned, and granted, SENuke's site importer sucks.

    What you could do is run that list through http://sickmarketing.com/forum/showthread.php?1351-Download-Sick-Platform-Reader (which is free) to filter those out.
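As a rough first pass before feeding the list to a platform reader, you can discard URLs whose address doesn't even contain the platform's submit script. This sketch only inspects the URL string, not the page itself, so it is a crude pre-filter rather than a substitute for a real platform checker; the URLs are hypothetical:

```python
# Hypothetical harvested URLs.
harvested = [
    "http://site-a.com/index.php?page=submitarticle",
    "http://site-b.net/blog/post-about-scrapebox",
    "http://site-c.org/index.php?page=submitarticle",
]

# Keep only URLs that mention the Article Beach submit script.
# Pages matched on body text alone won't carry this in the URL,
# so this pass trades recall for a much smaller, cleaner list.
likely = [u for u in harvested if "page=submitarticle" in u]
```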

    Thinking on my toes here ^^

