
karajo changelog

env: fix missing ini tag on IsDevelopment field
all: fix permission of "/srv/karajo"

The directory "/srv/karajo" may contain files served to the public or internally, so it should be accessible by other users or groups.

all: return and show the current version in API environment and main page
all: set the module Version during build

The Version information is derived from the latest tag and commit hash. This allows the command "karajo version" and the user interface to show which version is currently running.

make: fix file permissions when installing public index.html

Using 0640 (with the user and group owner set to root) causes the karajo service, which runs as the karajo user, to be unable to read the file.

This release mostly contains chores.

env: remove [rand.Seed] usage

The [ascii.Random] function generates random values using "crypto/rand", so there is no need to seed it anymore.

all: replace module "share" with "pakakeh.go"

The "share" module repository has been moved to SourceHut, with new name "pakakeh.go".

all: refactoring JobExec APIs to have "_exec" suffix

In JobHttp, we have the "_http" suffix for its HTTP APIs. To make it consistent, we change the HTTP API path to have the "_exec" suffix.

all: apply default revive suggestions

I prefer zero configuration rather than creating exclusions, like the "revive.toml" file that we had earlier, even though it will cause breaking changes to our APIs.

Breaking changes,

  • [Env.HttpJobs] becomes [Env.HTTPJobs]

  • [Env.HttpTimeout] becomes [Env.HTTPTimeout]

  • and many more.

all: implement API to cancel running job

In JobBase we add the method Cancel to cancel a running JobExec or JobHTTP.

In the HTTP server, we add endpoint "POST /karajo/api/job_exec/cancel" to cancel JobExec by its ID.
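
A request sketch, assuming the cancel endpoint accepts the same form-encoded parameters as the pause and resume APIs shown later in this changelog (the id value is a placeholder):

POST /karajo/api/job_exec/cancel
Content-Type: application/x-www-form-urlencoded

_karajo_epoch=&id=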

all: export the HTTP server field in Karajo

By exporting the HTTP server field, users of Karajo can register additional HTTP endpoints without creating a new HTTP server instance.
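
As a rough illustration of what this enables, the sketch below registers an extra endpoint on the exported server. It assumes the field exposes a pakakeh.go lib/http Server; the import path, endpoint path, and helper names here are assumptions for illustration, not the exact Karajo API.

// Hypothetical sketch: register an extra endpoint on the HTTP server
// exported by Karajo. The import path and endpoint helpers here are
// assumptions for illustration only.
package karajoext

import (
	libhttp "git.sr.ht/~shulhan/pakakeh.go/lib/http"
)

// RegisterHealthz adds a "/healthz" endpoint to the given server, for
// example the one exposed through Karajo's exported HTTP server field.
func RegisterHealthz(srv *libhttp.Server) error {
	return srv.RegisterEndpoint(&libhttp.Endpoint{
		Method:       libhttp.RequestMethodGet,
		Path:         `/healthz`,
		ResponseType: libhttp.ResponseTypePlain,
		Call: func(epr *libhttp.EndpointRequest) ([]byte, error) {
			return []byte(`OK`), nil
		},
	})
}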

all: always call finish even if the job is paused

This is to make sure that [JobBase.NextRun] is always set to the next interval or schedule.

_sys: set systemd unit to start after network.target

This fixes karajo failing to start because DNS was not working yet when initializing the email notification.
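
For illustration, the ordering looks like this in a systemd unit (a minimal sketch; the description, binary path, and install section are placeholders, not the actual unit shipped by karajo):

[Unit]
Description=karajo HTTP workers
# Start only after the network is up, so DNS lookups made while
# initializing the email notification can succeed.
After=network.target

[Service]
User=karajo
ExecStart=/usr/bin/karajo

[Install]
WantedBy=multi-user.target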

all: implement notification using email

The Karajo server now supports sending a notification when a job succeeds or fails, with the log inlined inside the email body.

all: change the JobHttp log to use the same as Job

Previously, JobHttp used a single log file with a limited size. The way it works is quite complex: we need to maintain one file and a buffer that truncates the log once it reaches its maximum size. We also need to store the job state in a separate file.

In this change, we replace the JobHttp log mechanism with the same one used by Job, where each execution is logged separately. Not only does this give us less complex code, it also removes some duplicate code.

In the HTTP API, we change the path to get the JobHttp log to match the Job path.

all: fix the HTTP API to get the Job log

The loop sets the job to a non-nil value even if the Job ID is not found, which may return the wrong Job log.

all: changes the HTTP response code for a successful JobExec run to 200

Previously, in the HTTP endpoint for running a job, we returned HTTP status code 202 (Accepted) on success, but in the WUI we checked for 200 (OK).

This change fixes it by returning HTTP status code 200, to make it consistent with the other endpoints.

www/karajo: changes the right status

For Jobs that are interval or schedule based, show the Next run counter in hours, minutes, and seconds. Other Job types (WebHook) will display the last time they were executed.

job_log: store the log content under unexported field content

The goal is to minimize the response size of the API to get the environment. Imagine we have 10 jobs with 10 logs each, where each log may contain ~10KB; in total we would return 10*10*10 KB in the response body.

To minimize this, return the log content only when it is requested through the API to get the Job or JobHttp log.

karajo/app: do not auto refresh the dashboard

Adding auto refresh consumes more resources (bandwidth, CPU) on both the client and the server. The job logs or information rarely change, so there is no need to auto refresh them.

cmd/karajo: set default configuration to "/etc/karajo/karajo.conf"

== karajo v0.7.0 (2023-05-10)

This release adds a login feature to Karajo using a user name and password that are pre-defined in user.conf.

all: remove MaxRunning and NumRunning from JobBase

A job should only run once at a time. If we allow the same job to run more than once at the same time, there would be a race condition in the command or Call that needs to be handled by the user.

all: implement login page

The karajo status page is now moved to /karajo/app/, while the old /karajo/ page is used for login.

The login page will be displayed only if Environment.Users is not empty, otherwise the user will be redirected to the app page automatically.

all: fix possible lock on API environment

Sometimes the request to /karajo/api/environment does not return any result. The only explanation is that something holds the lock on the resource, so we cannot acquire it and the request waits forever.

all: changes on how the job is queued using channels

Previously, a job ran using the following flow:

  • the interval/scheduler timer kicks in

  • the job is executed

  • the job completion is sent to the finished channel

If the job is triggered from an HTTP request, it runs on its own goroutine.

This change adds a third channel, startq, to JobBase that queues the Job. When the timer kicks in or an HTTP request is received, the Job is pushed to startq. The startq consumer executes the job and signals the completed job using finishq.
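
A simplified Go sketch of this queueing pattern (illustrative only, not the actual karajo implementation; the job type and channel handling are reduced to the bare flow described above):

// Simplified illustration of the startq/finishq flow: timers and HTTP
// triggers push a job into startq, a single consumer executes it, and
// completion is signalled through finishq.
package main

import (
	"fmt"
	"time"
)

type job struct{ name string }

func (j *job) run() { fmt.Println("running", j.name) }

func main() {
	startq := make(chan *job)
	finishq := make(chan *job)

	// Consumer: execute each queued job, then signal that it finished.
	go func() {
		for j := range startq {
			j.run()
			finishq <- j
		}
	}()

	j := &job{name: "example"}

	// Producer: an interval timer (or an HTTP trigger) queues the job.
	go func() {
		tick := time.NewTicker(time.Second)
		defer tick.Stop()
		for range tick.C {
			startq <- j
		}
	}()

	for done := range finishq {
		fmt.Println("finished", done.name)
	}
}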

This release adds a Job scheduler, Job as WebHook, loading Job and JobHttp configuration from a directory, and HTTP APIs for pausing and resuming a Job.

all: change the API path to execute Job

Previously, the API path to execute a Job is "/karajo/job/$job_path". This may cause a conflict in the future (if we want to serve any information related to a job on a specific page) and an inconsistent API path.

This changes the API path to execute a job to "/karajo/api/job/run/$job_path".

all: implement job timer with Scheduler

Unlike using an interval, the Scheduler option is more flexible and more human friendly. For example, one can run a job every day at 10:00 AM using

schedule = daily@10:00
all: implement Job auth_kind

A job can be triggered externally by sending an HTTP POST request to the Job's Path. Each request is authorized based on the AuthKind and optional Secret.

Supported AuthKind values are (see the configuration sketch after this list),

  • github: the signature is read from "x-hub-signature-256" and compared by signing the request body with Secret using HMAC-SHA256. If the header is empty, it will check another header, "x-hub-signature", and then sign the request body with Secret using HMAC-SHA1.

  • hmac-sha256 (default): the signature is read from HeaderSign and compared by signing the request body with Secret using HMAC-SHA256.

  • sourcehut: See https://man.sr.ht/api-conventions.md#webhooks
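
For illustration, a hypothetical webhook job configuration using these options; the option names follow the ini style used elsewhere in the karajo configuration, but the exact keys and values here are assumptions:

# Hypothetical sketch; keys and values are placeholders, and a real job
# also defines the commands or call to execute.
[job "release"]
path = /release
auth_kind = github
secret = s3cret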

all: implement loading JobHTTP configuration from separate directory

Previously, all JobHttp configuration must be defined in a single configuration file, karajo.conf.

This change makes the karajo configuration more manageable by loading the JobHttp configuration from all files under the directory $DirBase/etc/karajo/job_http.d, as long as the file suffix is ".conf".

all: implement loading Job configuration from separate directory

Previously, all job configuration must be defined in a single configuration file, karajo.conf.

This change makes the karajo configuration more manageable by loading the job configuration from all files under the directory $DirBase/etc/karajo/job.d, as long as the file suffix is ".conf".
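
For example, with these changes the per-job configuration files live under a layout like this (the file names are placeholders):

$DirBase
|
|-- /etc/karajo/karajo.conf
|
|-- /etc/karajo/job.d/my-job.conf
|
+-- /etc/karajo/job_http.d/my-job-http.conf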

all: implement HTTP API to resume the job execution

The HTTP API has the following signature:

POST /karajo/api/job/resume
Content-Type: application/x-www-form-urlencoded

_karajo_epoch=&id=

Where id is the job ID to be resumed.

all: implement HTTP API to pause a job

The HTTP API has the following signature:

POST /karajo/api/job/pause
Content-Type: application/x-www-form-urlencoded

_karajo_epoch=&id=

Where id is the job ID to be paused.

all: implement interval based Hook

Previously, a Hook could be triggered by sending an HTTP POST request to the karajo server. In most cases we create a JobHttp to trigger it, so we need to define one Hook and one JobHttp.

To simplify this, we add an Interval to Hook that works similarly to JobHttp, so now we only need to create a single Hook.

all: add required files for installing in GNU/Linux system

Running make install will run commands to install the required files to run karajo on GNU/Linux with systemd. The karajo service is installed but not enabled nor started automatically.

To uninstall, run make uninstall.

This changes the package function in the _AUR package to use make install instead of defining each command, to minimize duplication.

all: generate new secret if its empty on Environment init

If the user did not set the Secret in the main configuration karajo.conf, a new secret will be generated and printed to standard output on each run.

all: compress the response of the HTTP API Environment and Job log

Examining build.kilabit.info/karajo, both of these APIs return a large amount of data (> 400KB), which causes some delay when received over a slow network.

This change compresses the returned body with gzip, which decreases the size of the output by about 90% (to 40-60KB).

all: set default DirBase to "/"

Now that the configuration and directory structure are stable, we set the default DirBase to "/".

This is also to allow packaging karajo into OS package.

all: implement UI to trigger hook manually

Inside the Hook information, after the list of logs, there is a "Run now" button that triggers the hook. The run feature requires the secret to be filled and valid.

all: fix double checking for isPaused

There are two paths where Job.execute is called: one from handleHttp and one from Start. The one from handleHttp already checks whether the job is paused before calling execute. If we check again inside execute, we are doing it twice.

To fix this, we move the check to the Start method and set the Status to started before it.

_www/karajo: fix UI rendering an empty hook with status "Running ..."

When the hook is first registered, there are no logs and the status is empty.

internal: add function to convert adoc files to HTML files

The function, ConvertAdocToHtml, will be run when running the embed command in karajo-build. This is to make sure that the HTML files are updated before we embed them.

_AUR: add package builder script for Arch Linux

== karajo v0.5.0 (2022-08-10)

This release adds auto-refresh when viewing a hook's log, an option to customize the hook header signature, and an option to set the maximum number of hooks running at the same time.

all: enable auto generated index.html on public directory
hook/log: auto refresh hook log until it fails or succeeds

When opening a Hook's log in the browser, if its Status is still started, keep re-fetching it every 5 seconds until its Status changes to failed or success.

all: add options to set custom header signature in Hook

The HeaderSign option (or header_sign in the hook configuration) allows the user to define the HTTP header from which the signature is read. It defaults to "x-karajo-sign" if empty.
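
A hypothetical hook configuration using this option (the hook name, secret, and header value are placeholders):

# Hypothetical sketch; values are placeholders.
[hook "deploy"]
secret = s3cret
header_sign = x-deploy-sign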

all: limit hook running at the same time

In the Environment, we add the field MaxHookRunning that defines the maximum number of hooks running at the same time.

This field is optional, defaulting to 1.

While at it, clean up the log format to make the console output more readable.

all: fix possible data race on HTTP API for fetching hook log

Since the HookLog may still be written to when requested, accessing it periodically may cause a data race.

all: set environment PATH when running Hook command

Without setting the PATH, any command that uses sudo will return the error "command not found".

The current PATH value is derived from the default PATH after bootstrapping with base-devel.

all: fix the reuse Upstream-Name and Source

Due to copy-paste, we used ciigo as the Upstream-Name and Source.

all: split running the hook into separate goroutine

Previously, the hook wrote the HTTP response after the Call or all of the Commands finished. If the Hook runs longer than, say, 5 seconds, this may cause the request that triggered the hook to return with a timeout.

In this change, once we receive the request to trigger the Hook and the signature is valid, we return HTTP status 200 immediately and run the Hook job in another goroutine.

all: add timestamp to each Hook log command when executed

The goal is to know, from the log, when each command was executed.

all: set the Job and Hook Status before running

The Status is set to "started" so the interface can display a different color.

On the Job user interface, if the NextRun is less than now, it will show the text "Running ...".

On Hook, set the LastRun to zero time before running, so the WUI can show the status as "Running ...".

To test it, we add a random sleep to the Hooks in testdata.

all: store and display when the last Hook run

The Hook's last running time is derived from the last log, after the Hook has finished running, either as success or failure.

On the WUI, the last run is displayed next to the Hook name.

_www/karajo: display when the next Job will run in hours, minutes, seconds

To minimize the need to expand the Job, display the next Job run time right after the Job name in the following format

"Next run in ${hours}h ${minutes}m ${seconds}s"

_www/karajo: set the timer position fixed at the top

If the user scrolls to the bottom and opens one or more Jobs, they can inspect the Next run against the current timer without scrolling back to the top.

_www/karajo: add function to render Hook status on refresh
_www/karajo: set the log style to pre-wrap instead of wrap

Using the CSS style "white-space: wrap" with "overflow: auto" adds a horizontal scroll bar, which is not a good user experience: the user needs to scroll right and down if the log is wide and tall.

Highlights of this release,

  • Set minimum Go version to 1.17.

  • Introduce Hook, an HTTP endpoint that executes commands; the reverse of a Job.

  • Refactoring Environment. Karajo now runs under DirBase, where all Hook and Job logs and state are stored.

  • Refactoring Job configuration.

  • Improve web user interface (WUI) refresh mechanism.

  • Add authorization to Job APIs using secret and signature mechanism.

all: changes the Job configuration format to match with Hook

Previously, the job section is defined using [karajo "job"], while the hook section is defined as [hook "<name>"].

The format of the hook section is friendlier and shorter. So, to make it consistent, we change the job format to match the hook. The job section now becomes [job "<name>"].

_www: refactoring the job interface

Changes,

  • replace the Attributes and Logs buttons with a single click on the Job name.

  • minimize the job refresh requests from two (job and log) into one: job only.

  • move the Documentation link to the bottom

  • simplify rendering the job info and log into separate functions

  • update the Job status on refresh

This change affects the HTTP API for pausing and resuming the job: the job ID is now passed as a query parameter instead of in the path.

all: refactoring the Job

The Job log is now stored under Environment.dirLogJob + job.ID.

The Job state is now split into a separate struct, jobState, that contains the last run time and status.

The Job state is now saved under Environment.dirRunJob + job.ID instead of saving all jobs using gob in one file. The Job state is stored as text that can be read and edited by a human.

The Job IsPausing field is removed because it duplicates the Job Status.

all: refactoring the environment

This change removes DirLogs and adds DirBase, an ini option set under the karajo section as dir_base.

The DirBase option defines the base directory where the configuration, job state, and logs are stored. This field is optional, defaulting to the current directory. The directory structure follows the UNIX system,

$DirBase
|
|-- /etc/karajo/karajo.conf
|
+-- /var/log/karajo/job/$Job.ID
|
+-- /var/run/karajo/job/$Job.ID

Each job log is stored under the directory /var/log/karajo/job and the job state under the directory /var/run/karajo/job.

all: add option to serve directory to public

In the Environment, we add the field DirPublic that defines a path to be served to the public.

While the WUI is served under "/karajo", the directory dir_public will be served under "/". The dir_public can contain sub directories as long as their names are not "karajo".

In the configuration file, the DirPublic is set under the "karajo::dir_public" option.
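
For example (the directory path here is a placeholder):

# Placeholder path for illustration.
[karajo]
dir_public = /srv/karajo/public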

all: authorize HTTP API for pausing and resuming Job

The Environment now has the field Secret that contains the secret used to check the signature of the HTTP APIs for pausing and resuming a Job.

This requires adding an input field to the WUI to input the secret, generate the signature, and pass it on each request for Job pause and resume.

all: implement Hook

Hook is an HTTP endpoint that runs a function or a list of commands upon receiving a request; the reverse of a Job.

Each Hook contains a Secret for authenticating the request, a working directory, and a callback or list of commands to be executed when the request is received.

The circle is now complete!

all: add option to sign the Job payload using Secret

The Secret field (or "secret" option) defines a string used to sign the request query or body with HMAC+SHA-256. The signature is sent in the HTTP header "x-karajo-sign" as a hex string. This field is optional.
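
A minimal Go sketch of how such a signature can be computed and attached (illustrative only; the URL, secret, and body in main are placeholders, while the header name follows the description above):

// Sign the request body with HMAC-SHA256 using the shared secret, then
// send the hex-encoded signature in the "x-karajo-sign" header.
package main

import (
	"bytes"
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
)

func signedRequest(url, secret string, body []byte) (*http.Request, error) {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write(body)
	sign := hex.EncodeToString(mac.Sum(nil))

	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("x-karajo-sign", sign)
	return req, nil
}

func main() {
	// Placeholder URL, secret, and body for illustration.
	req, err := signedRequest("http://127.0.0.1:8080/example", "s3cret",
		[]byte(`{"_karajo_epoch":1656750073}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("x-karajo-sign"))
}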

all: add option to set HTTP method and request type on Job

The HttpMethod field (or http_method in the configuration) sets the HTTP method of the request. It accepts only GET, POST, PUT, or DELETE. This field is optional, defaulting to GET if empty.

The HttpRequestType field (or http_request_type in the configuration) defines the HTTP request type. It accepts only,

  • query: no Content-Type header is set, reserved for future use;

  • form: header Content-Type set to "application/x-www-form-urlencoded";

  • json: header Content-Type set to "application/json".

The type "form" and "json" only applicable if the method is POST or PUT. This field is optional, default to query.

_www/karajo: refresh whole hooks and jobs through environment

Instead of refreshing only a Job and its log when it is opened, re-fetch the environment (which includes the Hooks and Jobs) and render them every 10 seconds.

all: send the current epoch on each Job execution

Each Job execution sends the parameter named _karajo_epoch whose value is the current server Unix time.

If the request type is query, the parameter is inside the query URL. If the request type is form, the parameter is inside the body. If the request type is json, the parameter is inside the body as a JSON object, for example {"_karajo_epoch":1656750073}.

all: load previous job log on start up

Upon start, the Job log will be filled with the last logs. Currently, it reads 2048 bytes from the end of the log file.

all: add test for random hook and job result

The test-random hook will execute the command:

rand=$(($RANDOM%2)) && echo $rand && exit $rand

Sometimes it will fail and sometimes it will succeed. This allows us to check the user interface with multiple statuses on one hook or log.

all: generate ID using lib/net/html.NormalizeForID

The NormalizeForID replaces white spaces and any character that is not an ASCII letter, digit, '-', or '_' with '_'.

all: add documentation inside the website under /karajo/doc

The documentation is the same as the README but formatted using asciidoc.

This release changes the license of the karajo software to GPL 3.0 or later.

See https://kilabit.info/journal/2022/gpl/ for more information.

This release updates all dependencies and the code related to the affected changes.

  • all: move the karajo web user interface to sub-directory karajo

    In case the user of the karajo module also has an embedded memfs, merging the Karajo memfs with their memfs may cause conflicts (especially if the user has /index.html and /favicon.png).

  • www: make the showAttrs and showLogs poll every 10 seconds

    Previously, showAttrs and showLogs polled the job attributes and logs per job interval. For example, if the interval is 5 minutes, then the attributes and/or logs will be refreshed every 5 minutes.

    In order to let the user view the latest attributes and/or logs immediately, we change the interval to 10 seconds.

  • all: add prefix "http://" to address when logging at Start

The first release of karajo, programmable HTTP workers with web interface.

Features,

  • Running a job on a specific interval

  • Preserve the job states on restart

  • Able to pause and resume a specific job

  • HTTP APIs to programmatically interact with karajo