/*************************************************************************
 * (C) 2011 AITNET ltd - Sofia/Bulgaria -
 *  by Michael Pounov
 *
 * $Author: misho $
 * $Id: hooks.c,v 1.1 2011/08/05 15:52:00 misho Exp $
 *
 **************************************************************************
The ELWIX and AITNET software is distributed under the following
terms:

All of the documentation and software included in the ELWIX and AITNET
Releases is copyrighted by ELWIX - Sofia/Bulgaria

Copyright 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011
	by Michael Pounov.  All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software
   must display the following acknowledgement:
   This product includes software developed by Michael Pounov
   ELWIX - Embedded LightWeight unIX and its contributors.
4. Neither the name of AITNET nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY AITNET AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
*/ #include "global.h" #include "hooks.h" /* * sched_hook_init() - Default INIT hook * @root = root task * @data = optional data if !=NULL * return: <0 errors and 0 ok */ void * sched_hook_init(void *root, void *data) { sched_root_task_t *r = root; if (!r || r->root_data.iov_base || r->root_data.iov_len) return (void*) -1; r->root_data.iov_base = malloc(sizeof(struct sched_IO)); if (!r->root_data.iov_base) { LOGERR; return (void*) -1; } else { r->root_data.iov_len = sizeof(struct sched_IO); memset(r->root_data.iov_base, 0, r->root_data.iov_len); } r->root_kq = kqueue(); if (r->root_kq == -1) { LOGERR; return (void*) -1; } return NULL; } /* * sched_hook_fini() - Default FINI hook * @root = root task * @arg = unused * return: <0 errors and 0 ok */ void * sched_hook_fini(void *root, void *arg __unused) { sched_root_task_t *r = root; if (!r) return (void*) -1; if (r->root_kq > 2) { close(r->root_kq); r->root_kq = 0; } if (r->root_data.iov_base && r->root_data.iov_len) { free(r->root_data.iov_base); r->root_data.iov_base = NULL; r->root_data.iov_len = 0; } return NULL; } /* * sched_hook_cancel() - Default CANCEL hook * @task = current task * @arg = unused * return: <0 errors and 0 ok */ void * sched_hook_cancel(void *task, void *arg __unused) { struct sched_IO *io; sched_task_t *t = task; struct kevent chg[1]; struct timespec timeout; if (!t || !t->task_root || !ROOT_DATA(t->task_root) || !ROOT_DATLEN(t->task_root)) return (void*) -1; else io = ROOT_DATA(t->task_root); timespecclear(&timeout); switch (t->task_type) { case taskREAD: if (FD_ISSET(TASK_FD(t), &io->wfd)) EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_ADD, 0, 0, &TASK_FD(t)); else EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_DELETE, 0, 0, &TASK_FD(t)); kevent(t->task_root->root_kq, chg, 1, NULL, 0, &timeout); FD_CLR(TASK_FD(t), &io->rfd); break; case taskWRITE: if (FD_ISSET(TASK_FD(t), &io->rfd)) EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_ADD, 0, 0, &TASK_FD(t)); else EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_DELETE, 0, 0, &TASK_FD(t)); kevent(t->task_root->root_kq, chg, 1, NULL, 0, &timeout); FD_CLR(TASK_FD(t), &io->wfd); break; default: break; } return NULL; } /* * sched_hook_read() - Default READ hook * @task = current task * @arg = unused * return: <0 errors and 0 ok */ void * sched_hook_read(void *task, void *arg __unused) { struct sched_IO *io; sched_task_t *t = task; struct kevent chg[1]; struct timespec timeout; if (!t || !t->task_root || !ROOT_DATA(t->task_root) || !ROOT_DATLEN(t->task_root)) return (void*) -1; else io = ROOT_DATA(t->task_root); if (FD_ISSET(TASK_FD(t), &io->rfd)) return NULL; else FD_SET(TASK_FD(t), &io->rfd); timespecclear(&timeout); EV_SET(&chg[0], TASK_FD(t), EVFILT_READ, EV_ADD, 0, 0, &TASK_FD(t)); if (kevent(t->task_root->root_kq, chg, 1, NULL, 0, &timeout) == -1) { LOGERR; return (void*) -1; } return NULL; } /* * sched_hook_write() - Default WRITE hook * @task = current task * @arg = unused * return: <0 errors and 0 ok */ void * sched_hook_write(void *task, void *arg __unused) { struct sched_IO *io; sched_task_t *t = task; struct kevent chg[1]; struct timespec timeout; if (!t || !t->task_root || !ROOT_DATA(t->task_root) || !ROOT_DATLEN(t->task_root)) return (void*) -1; else io = ROOT_DATA(t->task_root); if (FD_ISSET(TASK_FD(t), &io->wfd)) return NULL; else FD_SET(TASK_FD(t), &io->wfd); timespecclear(&timeout); EV_SET(&chg[0], TASK_FD(t), EVFILT_WRITE, EV_ADD, 0, 0, &TASK_FD(t)); if (kevent(t->task_root->root_kq, chg, 1, NULL, 0, &timeout) == -1) { LOGERR; return (void*) -1; } return NULL; } /* * 
/*
 * sched_hook_fetch() - Default FETCH hook
 * @root = root task
 * @arg = unused
 * return: NULL error or !=NULL fetched task
 */
void *
sched_hook_fetch(void *root, void *arg __unused)
{
	struct sched_IO *io;
	sched_root_task_t *r = root;
	sched_task_t *task, *tmp;
	struct timeval now, m, mtmp;
	struct timespec nw, *timeout;
	struct kevent evt[1], res[KQ_EVENTS];
	register int i;
	int en;

	if (!r || !ROOT_DATA(r) || !ROOT_DATLEN(r))
		return NULL;

	/* get new task by queue priority */
retry:
	while ((task = TAILQ_FIRST(&r->root_event))) {
		TAILQ_REMOVE(&r->root_event, task, task_node);
		task->task_type = taskUNUSE;
		TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
		return task;
	}
	while ((task = TAILQ_FIRST(&r->root_ready))) {
		TAILQ_REMOVE(&r->root_ready, task, task_node);
		task->task_type = taskUNUSE;
		TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
		return task;
	}

#ifdef TIMER_WITHOUT_SORT
	clock_gettime(CLOCK_MONOTONIC, &nw);
	now.tv_sec = nw.tv_sec;
	now.tv_usec = nw.tv_nsec / 1000;

	timerclear(&r->root_wait);
	TAILQ_FOREACH(task, &r->root_timer, task_node) {
		if (!timerisset(&r->root_wait))
			r->root_wait = TASK_TV(task);
		else if (timercmp(&TASK_TV(task), &r->root_wait, -) < 0)
			r->root_wait = TASK_TV(task);
	}

	if (TAILQ_FIRST(&r->root_timer)) {
		m = r->root_wait;
		timersub(&m, &now, &mtmp);
		r->root_wait = mtmp;
	} else {
		/* set wait INFTIM */
		r->root_wait.tv_sec = r->root_wait.tv_usec = -1;
	}
#else
	if (!TAILQ_FIRST(&r->root_eventlo) && (task = TAILQ_FIRST(&r->root_timer))) {
		clock_gettime(CLOCK_MONOTONIC, &nw);
		now.tv_sec = nw.tv_sec;
		now.tv_usec = nw.tv_nsec / 1000;

		m = TASK_TV(task);
		timersub(&m, &now, &mtmp);
		r->root_wait = mtmp;
	} else {
		/* set wait INFTIM */
		r->root_wait.tv_sec = r->root_wait.tv_usec = -1;
	}
#endif
	/* if there is a member of eventLo, set NOWAIT */
	if (TAILQ_FIRST(&r->root_eventlo))
		timerclear(&r->root_wait);

	if (r->root_wait.tv_sec != -1 && r->root_wait.tv_usec != -1) {
		nw.tv_sec = r->root_wait.tv_sec;
		nw.tv_nsec = r->root_wait.tv_usec * 1000;
		timeout = &nw;
	} else	/* wait INFTIM */
		timeout = NULL;
	if ((en = kevent(r->root_kq, NULL, 0, res, KQ_EVENTS, timeout)) == -1) {
		LOGERR;
		goto retry;
	}

	timespecclear(&nw);
	/* Go and catch the cat into pipes ... */
	for (i = 0; i < en; i++) {
		memcpy(evt, &res[i], sizeof evt);
		evt->flags = EV_DELETE;
		/* Put read/write task to ready queue */
		switch (res[i].filter) {
			case EVFILT_READ:
				TAILQ_FOREACH(task, &r->root_read, task_node) {
					if (TASK_FD(task) != *((int*) res[i].udata))
						continue;
					/* remove read handle */
					io = ROOT_DATA(task->task_root);
					FD_CLR(TASK_FD(task), &io->rfd);

					TAILQ_REMOVE(&r->root_read, task, task_node);
					task->task_type = taskREADY;
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
					break;
				}
				break;
			case EVFILT_WRITE:
				TAILQ_FOREACH(task, &r->root_write, task_node) {
					if (TASK_FD(task) != *((int*) res[i].udata))
						continue;
					/* remove write handle */
					io = ROOT_DATA(task->task_root);
					FD_CLR(TASK_FD(task), &io->wfd);

					TAILQ_REMOVE(&r->root_write, task, task_node);
					task->task_type = taskREADY;
					TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
					break;
				}
				break;
		}
		if (kevent(r->root_kq, evt, 1, NULL, 0, &nw) == -1)
			LOGERR;
	}

	/* timer update; SAFE variant because expired tasks are unlinked while iterating */
	clock_gettime(CLOCK_MONOTONIC, &nw);
	now.tv_sec = nw.tv_sec;
	now.tv_usec = nw.tv_nsec / 1000;

	TAILQ_FOREACH_SAFE(task, &r->root_timer, task_node, tmp)
		if (timercmp(&now, &TASK_TV(task), -) >= 0) {
			TAILQ_REMOVE(&r->root_timer, task, task_node);
			task->task_type = taskREADY;
			TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
		}

	/* put an eventLo priority task on the ready queue if there is no ready
	 * task or the max missed fetch-rotate count has been reached */
	if ((task = TAILQ_FIRST(&r->root_eventlo))) {
		if (!TAILQ_FIRST(&r->root_ready) || r->root_eventlo_miss > MAX_EVENTLO_MISS) {
			r->root_eventlo_miss = 0;

			TAILQ_REMOVE(&r->root_eventlo, task, task_node);
			task->task_type = taskREADY;
			TAILQ_INSERT_TAIL(&r->root_ready, task, task_node);
		} else
			r->root_eventlo_miss++;
	} else
		r->root_eventlo_miss = 0;

	/* OK, let's get a ready task !!! */
	if (!(task = TAILQ_FIRST(&r->root_ready)))
		goto retry;

	TAILQ_REMOVE(&r->root_ready, task, task_node);
	task->task_type = taskUNUSE;
	TAILQ_INSERT_TAIL(&r->root_unuse, task, task_node);
	return task;
}
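
/*
 * Illustration only (not part of this file, guarded by #if 0): how the FETCH
 * hook's timeout is built, i.e. turning an absolute CLOCK_MONOTONIC deadline
 * (a struct timeval, as stored via TASK_TV()) into the relative timespec that
 * kevent() expects, or NULL for an INFTIM wait.  compute_kq_timeout() is a
 * hypothetical helper written only for this sketch; unlike the hook above, it
 * also clamps already-expired deadlines to a zero (non-blocking) wait.
 */
#if 0
#include <sys/time.h>
#include <time.h>
#include <stddef.h>

static struct timespec *
compute_kq_timeout(const struct timeval *deadline, struct timespec *ts)
{
	struct timeval now, left;
	struct timespec nw;

	if (!deadline)
		return NULL;			/* no timer task: wait INFTIM */

	clock_gettime(CLOCK_MONOTONIC, &nw);
	now.tv_sec = nw.tv_sec;
	now.tv_usec = nw.tv_nsec / 1000;

	timersub(deadline, &now, &left);	/* remaining time until expiry */
	if (left.tv_sec < 0)			/* already expired: poll, don't block */
		timerclear(&left);

	ts->tv_sec = left.tv_sec;
	ts->tv_nsec = left.tv_usec * 1000;
	return ts;
}
#endif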