Posted by pspzockerscene psp, Last modified by pspzockerscene psp on 08 February 2022 03:20 PM

Folder Watch is an addon which can be installed and enabled via Settings -> Extensions (scroll down) -> Folder Watch.

Folder Watch allows you to let JD monitor one or multiple folders for special .crawljob files, .DLC containers or normal URLs. After activating the Folder Watch addon, JD is by default monitoring the folder /folderwatch every 1000ms. Processed files will be moved to a subfolder inside the watched folder called "added". This is useful if you are e.g. running JD on a server and want it to process all URLs added via this method without the need of any further user interaction in JD.

To start, you can do this simple test using a .DLC file:
1. Open JDownloader and export some added URLs as a .DLC container via Rightclick -> Other -> Create DLC.
2. Delete the previously exported URLs in JD and move the created .DLC file into the above-mentioned "folderwatch" default folder.
3. After some seconds, the URLs inside your .DLC container will appear in your LinkGrabber and the .DLC container will be moved to "folderwatch/added".

If you want to use the full potential of Folder Watch, continue reading!

Using .crawljob files, you can tell JD how to process the URLs which get added whenever it processes said .crawljob files. .crawljob files can also be used without Folder Watch, e.g. to add URLs once with a custom package name/download path and so on, by adding .crawljob files to the LinkGrabber just like adding .DLC containers or normal URLs. The Folder Watch extension has to be enabled though to make this work!

Here is an overview of all possible crawljob fields. UNSET = existing global setting will be used; you may as well leave out fields completely instead of using the value UNSET.
- Enable/disable items added via this crawljob.
- Set this if a password is required to add this URL.
- Auto-start downloads after this item has been added to the LinkGrabber?
- Deep-analyze URLs added via this crawljob? Use only if the text contains a single URL only! Useful for URLs of websites which are not supported via a JD plugin, e.g. URLs which will contain more (direct-downloadable) URLs inside HTML code.
- Should properties set via this crawljob overwrite packagizer rules?
- ExtractPasswords=
- Add a dummy-offline link if added items were processed by a crawler and this crawler has detected that the URL you were trying to crawl is offline.

Now create your first .crawljob file and move this file into the folder which JD is watching, or see the attached files example1.crawljob and example1_json.crawljob.

Format 1: Text format. An example of this format is attached to this article as example1.crawljob!

Format 2: json format. You can put multiple crawljobs into one file. An example (setting e.g. "downloadFolder": "C:\\Users\\test\\Downloads") is attached to this article as example1_json.crawljob! Please keep in mind that syntax errors in your json will lead to a failure when trying to process your crawljob. It will be moved to the "added" folder anyways!
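For illustration, a minimal Format-1 (text format) crawljob could look like the sketch below. The URL, package name and exact field names (`text`, `packageName`, `downloadFolder`, `autoStart`) are assumptions drawn from the field overview above; the attached example1.crawljob is the authoritative reference.

```text
# one key=value pair per line; field names are assumptions, see example1.crawljob
text=https://example.com/somefile.zip
packageName=MyPackage
downloadFolder=C:\Users\test\Downloads
autoStart=TRUE
```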
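A Format-2 (json) crawljob is a list, so several jobs can go into one file. The sketch below reuses the `"downloadFolder"` value quoted above; the URLs and the other field names are assumptions for illustration, and the attached example1_json.crawljob is the authoritative reference. Remember that a json syntax error makes the whole file fail.

```json
[
  {
    "text": "https://example.com/file1.zip",
    "downloadFolder": "C:\\Users\\test\\Downloads",
    "packageName": "Package1",
    "autoStart": "TRUE"
  },
  {
    "text": "https://example.com/file2.zip",
    "downloadFolder": "C:\\Users\\test\\Downloads",
    "enabled": "TRUE"
  }
]
```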
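For the server use case above (adding URLs with no user interaction in JD), a script can drop .crawljob files into the watched folder. Since Folder Watch polls roughly every 1000ms, it is safer to write to a temporary file first and then rename it, so JD never reads a half-written file. This is a sketch, not part of JDownloader itself; the `folderwatch` path and the crawljob field names in the usage line are assumptions.

```python
import os
import tempfile

def drop_crawljob(watch_dir: str, name: str, content: str) -> str:
    """Write a .crawljob into the watched folder.

    Writes to a temporary file first, then renames it into place,
    so the Folder Watch poller never picks up a half-written file.
    watch_dir is an assumption -- point it at your JD folderwatch folder.
    """
    os.makedirs(watch_dir, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=watch_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write(content)
    dest = os.path.join(watch_dir, name + ".crawljob")
    os.replace(tmp, dest)  # atomic rename on the same filesystem
    return dest

# usage sketch; the field names here are assumptions, see the overview above
job = "text=https://example.com/file.zip\nautoStart=TRUE\n"
path = drop_crawljob("folderwatch", "myjob", job)
```

Writing and renaming within the same directory keeps the rename atomic, which matters because JD may scan the folder at any moment between two polls.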