# How Evergreen works
Application versions and download links are pulled only from official sources (vendor website, GitHub, SourceForge, etc.) and never from a third party.
Evergreen programmatically returns at least the version number and download URI for each application - thus each run of an Evergreen function should return the latest version and download link.
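That contract can be pictured as a small record type. A minimal sketch in Python (Evergreen itself is a PowerShell module, so the type and field names here are purely illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AppRelease:
    """Illustrative shape of an Evergreen-style result: at minimum
    a version string and a direct download URI for the installer."""
    version: str
    uri: str

# A hypothetical result for a single application release
release = AppRelease(version="1.2.3",
                     uri="https://example.com/app/1.2.3/installer.msi")
print(release.version, release.uri)
```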
Evergreen uses several strategies to return the latest version of software:
- Application update APIs - by using the same update mechanism as the application itself, Evergreen can consistently return the latest version number and download URI - e.g., Microsoft Edge, Mozilla Firefox or Microsoft OneDrive. Fiddler can often be used to find where an application queries for updates
- Repository APIs - repo hosts including GitHub and SourceForge have APIs that can be queried to return application versions and download links - e.g., Atom, Notepad++ or WinMerge
- Web page queries - a vendor's download pages will often include a query that returns JSON when listing versions and download links, which avoids page scraping. Evergreen can mimic this approach to return application download URLs; however, this approach is likely to fail if the vendor changes how their pages work - e.g., Adobe Acrobat Reader DC
- Static URLs - some vendors provide static or evergreen URLs to their application installers. These URLs often embed information that can be used to determine the application version, and they can be resolved to the actual target URL - e.g., Microsoft FSLogix Apps or Zoom
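Two of these strategies can be sketched briefly. The snippet below is a Python illustration, not Evergreen's actual PowerShell implementation: the first function parses a GitHub `releases/latest`-style JSON payload for the version and asset URLs, and the second extracts a version number embedded in a static download URL. The sample payload and URLs are fabricated for the example.

```python
import json
import re

def parse_github_release(payload: str):
    """Extract the version (tag) and asset download URLs from a
    GitHub 'releases/latest' API response body."""
    release = json.loads(payload)
    version = release["tag_name"].lstrip("v")  # tags are often prefixed "v"
    urls = [asset["browser_download_url"] for asset in release["assets"]]
    return version, urls

def version_from_static_url(url: str):
    """Pull a dotted version number out of a static download URL path,
    if one is embedded; return None otherwise."""
    match = re.search(r"/(\d+(?:\.\d+)+)/", url)
    return match.group(1) if match else None

# Sample (fabricated) GitHub API response, trimmed to the fields used above
sample = json.dumps({
    "tag_name": "v8.6.0",
    "assets": [{"browser_download_url":
                "https://github.com/example/app/releases/download/v8.6.0/app-x64.exe"}],
})
version, urls = parse_github_release(sample)
print(version, urls[0])

# Sample (fabricated) static URL with the version embedded in the path
print(version_from_static_url("https://example.com/downloads/2.9.7654/AppSetup.zip"))
```

In practice the payload would come from an HTTP request to the repository host's API, and rate limits and authentication would need handling; those concerns are omitted here.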
## What Evergreen Doesn't Do
Evergreen does not scrape HTML - scraping web pages to parse text and determine version strings and download URLs is unreliable when the page text changes or the page is out of date. Pull requests that use web page scraping will be closed.
While regular expressions are used to determine application properties (particularly version numbers) for some applications, this approach is avoided wherever possible.
For additional applications where the only recourse is to use web page scraping, see the Nevergreen project.