Response | Instruction | Prompt
|---|---|---|
In Chrome you can use the FileSystem API:
http://www.noupe.com/design/html5-filesystem-api-create-files-store-locally-using-javascript-webkit.html
This allows you to save and read files from a sandboxed file system through the browser.
As for other browsers, it has not been confirmed as an addition to the HTML5 specification set yet, so it is only available in Chrome.
You could also use IndexedDB, which is supported in all modern browsers.
You can use either of these services inside a Service Worker to manage the loading and storage of the content. However, I have to question why you would want to prevent yourself from ever updating your index.html.
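Not from the original answer: a minimal sketch of the Service Worker plus Cache Storage approach it describes (the cache name and the file list are assumptions, not the asker's loader):

// sw.js — cache index.html on install and always answer from the cache afterwards
const CACHE_NAME = 'immutable-shell-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/index.html']))
  );
});

self.addEventListener('fetch', (event) => {
  // serve the cached copy if present; fall back to the network for everything else
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});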
Note that the Filesystem API (formal title: File API: Directories and System) is obsolete and no other browsers have any plans to ever implement support for it. The specification itself has been demoted to a W3C “Note” with a warning in bold: Work on this document has been discontinued and it should not be referenced or used as a basis for implementation. So while it’s fine to use for the case where somebody might only care about having something that works just in Chrome, they should not expect to see it working in any other browsers, ever.
– sideshowbarker
BTW, this is why we don't trust the W3C on HTML5: they standardised something that should not have been standardised. Updated on the 11th of January 2017, wicg.github.io/entries-api says: "Other browsers (at this time: Microsoft Edge and Mozilla Firefox) are starting to support subsets of Chrome’s APIs and behavior."
– Barkermn01
|
|
I am designing a JavaScript secure loader. The loader is inlined in the index.html. The goal of the secure loader is to only load JavaScript resources that are trusted. The contents of index.html are mostly limited to the secure loader. For security purposes, I want index.html (as stored in cache) to never change, even if my website is hacked.
How can I cache index.html without the server being able to tamper with the cache? I am wondering if ServiceWorkers can help. Effectively, the index.html would register a service worker for fetching itself from an immutable cache (no network request is even made).
|
Permanent browser cache using ServiceWorker
|
The resolution was to install the NVIDIA device plugin on the cluster so that the cluster will identify the GPU nodes.
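A sketch of what that usually looks like (an assumption about the exact setup; the manifest comes from the NVIDIA k8s-device-plugin project and the file name here is a placeholder, so fetch the release matching your cluster):

# deploy the NVIDIA device plugin DaemonSet
kubectl apply -f nvidia-device-plugin.yml

# verify that GPU nodes now advertise the nvidia.com/gpu resource
kubectl describe nodes | grep -i "nvidia.com/gpu"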
|
I am not able to create a nodegroup with a GPU instance type using EKS; I'm getting this error from CloudFormation:

[!]  retryable error (Throttling: Rate exceeded status code: 400, request id: 1e091568-812c-45a5-860b-d0d028513d28) from cloudformation/DescribeStacks - will retry after delay of 988.442104ms

This is my clusterconfig.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: CLUSTER_NAME
  region: AWS_REGION
nodeGroups:
  - name: NODE_GROUP_NAME_GPU
    ami: auto
    minSize: MIN_SIZE
    maxSize: MAX_SIZE
    instancesDistribution:
      instanceTypes: ["g4dn.xlarge", "g4dn.2xlarge"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotInstancePools: 1
    privateNetworking: true
    securityGroups:
      withShared: true
      withLocal: true
      attachIDs: [SECURITY_GROUPS]
    iam:
      instanceProfileARN: IAM_PROFILE_ARN
      instanceRoleARN: IAM_ROLE_ARN
    ssh:
      allow: true
      publicKeyPath: '----'
    tags:
      k8s.io/cluster-autoscaler/node-template/taint/dedicated: nvidia.com/gpu=true
      k8s.io/cluster-autoscaler/node-template/label/nvidia.com/gpu: 'true'
      k8s.io/cluster-autoscaler/enabled: 'true'
    labels:
      lifecycle: Ec2Spot
      nvidia.com/gpu: 'true'
      k8s.amazonaws.com/accelerator: nvidia-tesla
    taints:
      nvidia.com/gpu: "true:NoSchedule"
|
GPU nodegroup in EKS
|
You should use something called an integration. Here you can see the GitHub Integrations Directory. My favorite is Travis CI: you set it up using a .travis.yml file, and then after the commits are pushed the tests are run and Travis sends the status response, which will be visible in the pull request. However, this can't stop the user from submitting the pull request. Like I mentioned, you cannot stop the user from opening pull requests, but you can tell them the steps for how to contribute using a CONTRIBUTING.md in your project. Then whenever somebody opens a pull request or issue, they will see this alert.
|
Basically, whenever somebody raises a PR on my repository, I want to ensure that the person raising the PR has performed some actions (running a script, etc.). So is there a way to set up some rule or some alert to remind the person to perform that action before raising the PR?
|
Setup rules/alerts before raising a PR in Github
|
I have experienced a very similar issue. Ensure that mod_headers is enabled.

1 - To enable mod_headers on Apache2 (httpd) you need to run this command:

sudo a2enmod headers

Then restart Apache:

sudo service apache2 restart

2 - To allow Access-Control-Allow-Origin (CORS) authorization for specific origin domains for all files, add this to your .htaccess:

<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin https://example.org
    Header set Access-Control-Allow-Origin https://example.com
    Header set Access-Control-Allow-Origin https://example.eu
    ## SECURITY WARNING: never add the following line when the site is in production
    ## Header set Access-Control-Allow-Origin "*"
</IfModule>

3 - To allow Access-Control-Allow-Origin (CORS) authorization for specific origin domains and for fonts only in our example, use FilesMatch like in the following section in your .htaccess:

<FilesMatch "\.(ttf|otf|eot|woff|woff2)$">
    <IfModule mod_headers.c>
        Header set Access-Control-Allow-Origin https://example.org
        Header set Access-Control-Allow-Origin https://example.com
        Header set Access-Control-Allow-Origin https://example.eu
    </IfModule>
</FilesMatch>

After making changes in the .htaccess file, there is no need to restart your Apache web server.
|
We have been having the problem where we get errors of the format:

Font from origin 'https://example.com' has been blocked from loading by
Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin'
header is present on the requested resource. Origin
'https://www.example.com' is therefore not allowed access.

We also get a "Redirect at origin" error. We are using Drupal 7 and Cloudflare. We have attempted to edit .htaccess to include:

Header set Access-Control-Allow-Origin "https://example.com"
Header set Access-Control-Allow-Origin "https://www.example.com"

We have tried quite a lot:
purged Cloudflare
restarted Apache
tried the wildcard "*"
the Drupal CORS module

So far no joy. As this approach is not working, I am wondering if something is being missed or if there is an alternate approach, such as why we are getting origin 'https://example.com' in the request via Drupal and not 'https://www.example.com'. A last note is that when I review some resources I see two distinct patterns.
If a resource has a status of "301 Moved Permanently", the request headers contain:
Host www.example.com
Referer https://example.com/
Where the status is "304 Not Modified":
Host example.com
Referer https://example.com/
It's odd that there is any www at all; .htaccess should be redirecting and it is absent from base_url.
|
CORS Access-Control-Allow-Origin Error on Drupal 7 with Cloudflare
|
I cannot reproduce this. The memory perhaps increases by 200 megabytes, and that includes the GHCi runtime itself.
We can, however, work with a strict version of scanl to improve memory usage: scanl' :: (b -> a -> b) -> b -> [a] -> [b], which forces evaluation of the accumulated items as it enumerates over the list:
import Data.List(scanl')
apply :: [Int] -> [Int]
apply [] = []
apply (d:ds) = scanl' (\a x -> (a + x) `mod` 10) d ds
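As a quick check (not part of the original answer), appending a main to the module above and running it on the sample input from the question gives the expected first phase:

main :: IO ()
main = print (apply [2,4,3,7,9,3])  -- prints [2,6,9,6,5,8]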
Thanks, scanl' worked. I don't really understand why there is that difference, because I used strict evaluation, and I understand that scanl1 did not, but a single run of the apply function ran smoothly even with a larger amount of data. Anyway, I now have a solution :)
– FERcsI
|
|
I would like to do the following:
[2,4,3,7,9,3]
[2,6,9,6,5,8]
[2,8,7,3,8,6]
[2,0,7,0,8,4]
I.e. in each phase I would like to modularly (10 in this case) sum up all the values from the beginning of the list to the current position, creating a new list: [2, (2+4) `mod` 10, (2+4+3) `mod` 10, (2+4+3+7) `mod` 10...], and I would like to call this multiple times.
My code is:
applyMulti :: Int -> [Int] -> [Int]
applyMulti 0 dat = dat
applyMulti n dat = applyMulti (n - 1) $ apply dat
apply :: [Int] -> [Int]
apply dat = scanl1 (\a x -> (a + x) `mod` 10) dat
That works fine with a small amount of data and a small repetition count. But with a list of 500k Ints run 100 times, it eats up the memory and the program is killed. I don't understand why. Even 500k * 100 should not be huge, and that could even be optimized by the compiler.
I also tried a non-tail-recursive version:
applyMulti n dat = apply . applyMulti (n - 1) $ dat
and I even used:
{-# LANGUAGE BangPatterns, StrictData, Strict #-}
What is my mistake and how should I fix this issue?
|
Haskell 'scanl with recursion' memory issue
|
You must change vendor/ to /vendor/ so git will ignore only the root vendor folder.
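Applied to the .gitignore from the question, only the vendor entry changes:

# anchored to the repository root; public/resources/member/js/vendor is no longer matched
/vendor/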
|
|
I added my composer vendor folder in gitignore using the lines below:
application/cache/*
public/uploads/*
application/config/credentials.php
application/logs/*
vendor/
.env
.DS_STORE
temp/
composer.lock
Thus it is excluding the vendor folder in the root of my application (same directory as .gitignore file)
However, it is also ignoring all files located at /public/resources/member/js/vendor.
How do I ensure that .gitignore only ignores the vendor folder at the root of the application?
|
.gitignore is ignoring all files in subdirectories [duplicate]
|
How much RAM does the computer have?
Try changing/setting, along with the -Xms256M -Xmx1024M values you mentioned, the NewSize, MaxNewSize, PermSize, MaxPermSize, etc. VM values, for instance: -XX:NewSize=64m -XX:MaxNewSize=128m -XX:PermSize=64m -XX:MaxPermSize=128m
Try different values...
|
|
Every time I try to export my project with ProGuard obfuscation, it shows "java.lang.OutOfMemoryError: Java heap space".
It won't show the error if I export with the "-dontobfuscate" parameter, but this makes my use of ProGuard useless.
I tried to use -Xms256M -Xmx1024M (also 1536 and 2048) in different places, but it won't work. The weird thing is that when I look at the Task Manager, it stops at ~256MB. So I think I might have used the parameters in the wrong places.
Please help, thank you. (Sorry for bad English)
|
"Out Of Memory" when trying to export apk with ProGuard obfuscation
|
The precise details of how std::vector is implemented will vary from compiler to compiler, but more than likely, a std::vector contains a size_t member that stores the length and a pointer to the storage. It allocates this storage using whatever allocator you specify in the template, but the default is to use new, which allocates them off the heap. You probably know this, but typically the heap is the area of RAM below the stack in memory, which grows from the bottom up as the stack grows from the top down, and which the runtime manages by tracking which blocks of it are free.
The storage managed by a std::vector is a contiguous array of objects, so a vector of twenty vectors of T would contain at least a size_t storing the value 20, and a pointer to an array of twenty structures, each containing a length and a pointer. Each of those pointers would point to an array of T, stored contiguously in memory.
If you instead create a rectangular two-dimensional array, such as T table[ROWS][COLUMNS], or a std::array< std::array<T, COLUMNS>, ROWS >, you will instead get a single continuous block of T elements stored in row-major order, that is: all the elements of row 0, followed by all the elements of row 1, and so on.
If you know the dimensions of the matrix in advance, the rectangular array will be more efficient because you’ll only need to allocate one block of memory. This is faster because you’ll only need to call the allocator and the destructor one time, instead of once per row, and also because it will be in one place, not split up over many different locations, and therefore the single block is more likely to be in the processor’s cache.
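Not part of the original answer: a small, self-contained illustration of the two layouts described above (the dimensions are arbitrary):

#include <array>
#include <cstdio>
#include <vector>

int main() {
    // Jagged layout: each inner vector owns its own heap block, so rows are scattered
    // and resizing one row never moves the others.
    std::vector<std::vector<int>> jagged(3, std::vector<int>(4, 0));
    for (std::size_t r = 0; r < jagged.size(); ++r)
        std::printf("jagged row %zu begins at %p\n", r,
                    static_cast<void*>(jagged[r].data()));

    // Rectangular layout: one contiguous block in row-major order.
    std::array<std::array<int, 4>, 3> rect{};
    std::printf("rect occupies one block: %p .. %p\n",
                static_cast<void*>(&rect[0][0]),
                static_cast<void*>(&rect[2][3] + 1));
    return 0;
}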
|
How is a 2D array laid out in memory? Especially if it's a staggered (jagged) array. Given, to my understanding, that memory is contiguous going from max down to 0, does the computer allocate each array in the array one after the other? If so, should one of the arrays in the array need to be resized, does it shift all the other arrays down so as to make space for the newly sized array?
If specifics are needed:
C++17/14/11
Clang
linux x86
Revision: (thanks user4581301)
I'm referring to having a vector<vector<T>> where T is some defined type. I'm not talking template programming here unless that doesn't change anything.
|
Memory layout of a 2D array
|
Hasura suggests two ways to deploy and run cron jobs.

Cron microservice
Hasura already has a microservice to run cron jobs. If you already have a Hasura project, run:

hasura microservice create mycron --template=python-cron

Change mycron to whatever you want to name your microservice. This will create a custom Python microservice designed to run cron jobs. (Follow further instructions as prompted by the hasura CLI.) To deploy this on Hasura, git commit and push to your cluster's remote:

$ git add .
$ git commit -m "Add cron job"
$ git push hasura master

To learn more about how to customize this microservice, you can read the docs.

Kubernetes Cron Jobs
Since Hasura runs on Kubernetes, and Kubernetes (>= v1.8) already provides Cron Jobs as a first-class resource, it is recommended to use Kubernetes Cron Jobs wherever possible. If you have kubectl installed you can check your Kubernetes version by running kubectl version; in the output, the "server version" shows the version of the Kubernetes cluster. If you are running Kubernetes >= v1.8, we recommend you use Kubernetes Cron Jobs. When using Kubernetes Cron Jobs, you can version-control your cron job specs inside your Hasura project and use the kubectl tool to create and manage them.
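For reference, a minimal Kubernetes CronJob spec of the kind the answer recommends (the name, image, schedule and command are placeholders, not taken from Hasura documentation):

apiVersion: batch/v1           # use batch/v1beta1 on older clusters
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"      # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: example-cron
            image: busybox
            args: ["/bin/sh", "-c", "date; echo running scheduled task"]

Apply it with kubectl apply -f cronjob.yaml and inspect runs with kubectl get cronjobs.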
|
How can I create, deploy, run and manage cron jobs on Hasura?
|
How to create cron jobs on Hasura?
|
In OpenLayers 3, you can configure a tile layer source with a custom tileLoadFunction to implement your own storage solution:

new WhateverTileSource({          // placeholder for the tile source class you actually use
  tileLoadFunction: function(imageTile, src) {
    var imgElement = imageTile.getImage();
    // check if image data for src is stored in your cache
    if (inCache) {                // inCache / imgDataUriFromCache stand for your own cache lookup
      imgElement.src = imgDataUriFromCache;
    } else {
      imgElement.onload = function() {
        // store image data in cache if you want to
      };
      imgElement.src = src;
    }
  }
});
|
I'm using OpenLayers 3 and all the offline examples I've seen only include localStorage for saving and retrieving map tiles. The problem is that localStorage is limited to about 5 megabytes, which is too small for my application. If I were using Leaflet instead, I could extend L.TileLayer by writing my own custom storage solution in the getTileUrl function. Is there something appropriate like that in OpenLayers 3? I'd really like to use IndexedDB or even WebSQL over localStorage.
|
Can OpenLayers 3 use WebSQL or IndexedDB to cache map tiles
|
OpenBSD's nc supports -U to connect to UNIX-domain sockets, and should be reasonably portable. Source is in CVS (see anoncvs access), and Debian has some tarballs.
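Typical usage once a UNIX-domain-socket-capable nc is installed (the socket path is a placeholder):

# raw byte transfer into a UNIX domain socket with OpenBSD netcat
printf 'hello\n' | nc -U /tmp/mysocket.sock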
|
I'm trying to write (raw byte transfer, no fancy stuff) some data into a UNIX domain socket in Mac OS X (10.6) from the terminal (bash). socat is not available and does not compile straight from source on OS X. According to Google, some versions of netcat support UDSs, but neither of these do once compiled from source:
http://netcat.sourceforge.net/
http://nc110.sourceforge.net/
Any ideas?
|
Accessing a unix domain socket in Mac OS X
|
When using the SonarQube scanner for Maven, you can't specify properties that only apply to some of the modules on the command line. In the modules where you want to modify the sources, add a property in the pom.xml. For example, in module5/pom.xml add:

<properties>
    <sonar.sources>src,gen</sonar.sources>
</properties>
|
I have one multi-module Maven project where there are source directories apart from 'src' in which Java files reside. This is the folder structure:

folder1
  - pom.xml

pom.xml contains modules defined like this:

<modules>
    <module>module1</module>
    <module>module2</module>
    <module>module3</module>
    <module>module4</module>
    <module>module5</module>
    <module>module6</module>
    <module>module7</module>
    <module>module8</module>
    <module>module9</module>
    <module>module10</module>
</modules>

Different modules are organized like this:

module1
  - src
  - gen
module2
  - src
module3
  - gen
module4
module5
  - src
  - gen

So, as you see, there are modules/projects which have either src or gen or both, or don't have either of them. When I run FindBugs analysis, it picked up only Java classes from 'src' and skipped 'gen' (natural, as the Maven model forces the analyzer to pick only from src). So, in the Jenkins job configuration, I defined the sources explicitly like this:

-Dsonar.sources=src,gen
-Dsonar.exclusions=src/test/java/**

When I run with this configuration, the analysis fails for modules which don't have both src and gen (module2, module3, module4). So, how do I run the analysis to pick either src or gen, or skip the module if either of them is not found?

Thanks,
Ron
|
How to Run SonarQube Findbugs Analysis for a project with multiple source directories
|
The newly allocated memory pointed to by output is not initialized: it may have any contents.
strlen requires its argument to be a pointer to a null-terminated string, which output is not, because it hasn't been initialized. The call strlen(output) causes your program to exhibit undefined behavior because it reads this uninitialized memory. Any result is possible.
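A minimal corrected sketch of the loop from the question (not from the original answer): the buffer gets one extra byte, defined contents, and a terminating null before strlen is called; the fill character is arbitrary.

#include <cstdio>
#include <cstring>

int main() {
    for (short i = 0; i < 2; i++) {
        char* output = new char[i + 2];   // i+1 characters plus room for '\0'
        std::memset(output, 'x', i + 1);  // give the buffer defined contents
        output[i + 1] = '\0';             // strlen needs a null terminator
        std::printf("string length: %zu\n", std::strlen(output));  // prints 1, then 2
        delete[] output;
    }
    return 0;
}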
|
I would like to have a dynamic character array whose length equals the loop iteration.
char* output;
for (short i=0; i<2; i++){
output = new char[i+1];
printf("string length: %d\n",strlen(output));
delete[] output;
}
But strlen is returning 16, where I would expect it to be 1 and 2.
|
Dynamic character array not giving correct string length?
|
Apparently I was not logged in - just npm kept the cached version of the package. Back to square one again. If you run into the same problem, try to clean the cache or bump the package version to test it out.
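The commands that usually cover those two workarounds (the registry URL is the one from the question; whether your npm version needs the --force flag is an assumption):

npm logout --registry=https://npm.pkg.github.com
npm cache clean --force     # drop the locally cached tarballs
npm cache verify            # confirm the cache is consistent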
|
I'm building a project that uses a private GitHub package. I have been using it locally with npm login --registry=https://npm.pkg.github.com which, in hindsight, was not the smartest thing as I actually need to use it in the production environment. For that I use netlify and unfortunately, it throws 401 Unauthorized whenever I try to deploy it.
Now, the problem is that I have a very hard time debugging it on my local machine because, for some unknown reason, I keep being authorized despite running npm logout --registry=https://npm.pkg.github.com. Trying to run logout again I get npm ERR! Not logged in to - and yet I can still download the package.
I don't have the auth token in my .npmrc file. How come it is still working? What can I do to go back to being unauthorized?
|
Logging out of GitHub Packages for npm
|
According to the nginx.conf you provided here, try the below:

location ^~ /videos/ {
    rewrite "^/videos/([a-zA-Z0-9]{23})\.mp4$" /get.php?token=$1 break;
}

This should match the URL example.com/videos/abc01234567890123456789.mp4 and redirect to example.com/get.php?token=abc01234567890123456789
DISCLAIMER: config not tested, may have some typos.
|
I have nginx 1.2.1. I want to rewrite http://mywebsite.com/[token].mp4 to http://mywebsite.com/get.php?token=[token]. It returns error 404. My block:

location ~* \.(mp4)$ {
    rewrite "^/([a-zA-Z0-9]{23})\$" /get.php?token=$1 break;
}

I tried this question but nothing; it returns error 404.
|
Nginx - Redirect url for certain file type
|
For example, you have a CSV file holding the aliases, aliases.csv, looking like:

alias1
alias2
alias3
etc.

So you can add a CSV Data Set Config to read this file and store the alias value into, say, an alias variable. Finally, you can use the alias variable value in the Keystore Configuration, which will take the value of the alias from the CSV file.
More information: How to Use Multiple Certificates When Load Testing Secure Websites
|
I have a p12 file, which is needed to execute tests. I added the following lines to the system.properties file:

javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStore=C:\certs\certificate.p12
javax.net.ssl.keyStorePassword=certificate_password

It was not working, so I created a jks file from the certificate with keytool and set it in the same file:

javax.net.ssl.keyStore=C:\certs\keystore.jks
javax.net.ssl.keyStorePassword=certificate_password

I used a CSV Data Set Config to also set the alias, which is used in the Keystore Configuration component, but I am not sure what should be stored in the CSV data file or how to provide the key aliases. Options -> SSL Manager stores certificates only until JMeter is closed; it doesn't store them permanently.
|
Save SSL certificate in JMeter
|
There are two parts to an answer to your question:

Pods must have individual, cluster-routable IP addresses, and one should be very cautious about recycling them.
You can, if you wish, not use any software-defined network (SDN).

With the first part, it is usually a huge hassle to provision a big enough CIDR to house the address range required for supporting every Pod that is running across every Namespace, and have the space be big enough to avoid recycling addresses for a very long time. Thus, having an SDN allows using "fake" addresses that one need not bother the "real" network with knowing about. No routers need to be updated, no firewalls, no DHCP, whatever.

That said, as for the second part, you don't have to use an SDN: that's exactly what the container network interface (CNI) is designed to paper over. You can use the CNI provider that makes you the happiest, including using static IP addresses or the outer network's DHCP server.

But your comment about port collisions is pretty high up the list of reasons one wouldn't just want to use hostNetwork: true and be done with it; I'm actually not certain whether the default Kubernetes scheduler is aware of hostNetwork: true and the declared ports: on the containers: in order to avoid co-scheduling two containers that would conflict. I guess try it and see, or, better yet, don't try it -- use CNI so the next poor person who tries to interact with your cluster doesn't find a snowflake setup.
|
Using Docker can simplify CI/CD but it also introduces complexity; not everybody is able to handle Docker networking, even when selecting open-source solutions like Flannel or Calico.
So why not use the host network in Docker, or what is lost if you use the host network in Docker?
I know the port conflict is one point, any others?
|
Why not use the host network in Docker, since Docker and Kubernetes networking is so complex?
|
It turns out that the token was invalid (not sure if it is because of the 12-hour expiration). If you simply F5 the browser page you are not re-authenticated but can still access the console page; actually the token should be updated by logging in to the ICP Portal again.

The issue was fixed by re-accessing the ICP portal: https://<master host>:8443/console/
This lets you re-authenticate. After that, go to admin -> configure client and paste the latest commands; you will find the token might be updated. Executing the new commands solved the issue.

Two questions are still left:
a) If the page was open for a long time and the token expired, the ICP portal page may not auto-refresh to force you to re-login, which means the token in the set-credentials command is still the old one.
b) Even setting old tokens is accepted, and the command never complains with an error or even a warning. This may mislead us when tokens are changed on the servers. E.g., if I saved the commands to a local txt file and re-executed it (even after the token expired), the commands still finished successfully, but I still didn't get authenticated correctly when I tried to log in.
|
Today I met a strange issue: my Windows kubectl client suddenly raised an authorization issue when connecting to ICP. I was using ICP with a Windows-configured kubectl.exe. Then, after a while, due to the laptop automatically sleeping, my VPN connection was disconnected, hence I lost the connection to the remote ICP. Later I came back and re-connected to the ICP. I used the kubectl command again and faced:

error: You must be logged in to the server (Unauthorized)

On the ICP master node, nothing is wrong if I use:

kubectl -s 127.0.0.1:8888 -n kube-system get pods -o wide

I went back to re-configure the client (pasted the code copied from admin -> configure kubectl); the commands executed successfully, but when I issue kubectl get pods I still get the error. I checked these articles:

kubectl - error: You must be logged in to the server
kubectl error: "You must be logged in to the server (the server has asked for the client to provide credentials)"
error: You must be logged in to the server (the server has asked for the client to provide credentials)

It looks like they weren't much help.
|
kubectl error: You must be logged in to the server (Unauthorized)
|
You can do this with Illuminate\Cache, which is a part of Laravel although it can be used on its own. In order to configure it you need to have the following Composer libraries installed:

predis/predis
illuminate/redis
illuminate/cache

Here is an example:

<?php
require_once __DIR__ . '/vendor/autoload.php';

$servers = [
    'cluster' => false,
    'default' => [
        'host'     => '127.0.0.1',
        'port'     => 6379,
        'database' => 0,
    ],
];

$redis = new Illuminate\Redis\Database($servers);
$cache = new Illuminate\Cache\RedisStore($redis);

$cache->tags('posts', 'author_1')->put('post_1', 'Post 1 by Author 1', 1);
$cache->tags('posts', 'author_2')->put('post_2', 'Post 2 by Author 2', 1);

var_dump($cache->tags('posts', 'author_1')->get('post_1'));
var_dump($cache->tags('posts', 'author_2')->get('post_2'));

$cache->tags('author_2')->flush();

var_dump($cache->tags('posts', 'author_1')->get('post_1'));
var_dump($cache->tags('posts', 'author_2')->get('post_2'));

The result will be:

php test.php
string(18) "Post 1 by Author 1"
string(18) "Post 2 by Author 2"
string(18) "Post 1 by Author 1"
NULL
|
I am looking for an easy way to store cache in Redis and mark pieces of cache with tags, so that when needed I could easily delete all the cache marked with a specific tag. Is there a good ready-to-use solution for that? (I am going to access Redis with PHP.)

I would do it myself; as I understand it, I need to store tags as sets, where the values are the keys of the cache entries that use the tag. I can even cover the situation when I delete a cache entry and its key should be removed from the tag's set (I can store the list of tags in the cached element for that). But I am not sure how to handle it when the cache expires: in that case its key will be "stuck" in a tag, and the next time I delete cache by tag it will clean up a cache key that may not be in use anymore. So I am looking for a ready solution, at least to see how it is done.
|
Is there a good solution for cache tagging with PHP/Redis?
|
You did what's called a fast-forward merge. When you do git merge and one branch is a superset of the other, by default, Git just updates the branch you're merging into to be exactly the same as the other branch.
If you want to create a merge commit in such a case, then you want to add the --no-ff option to do so. That will result in a merge commit, which will give the graph the expected shape.
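Applied to the branches from the question, the merge step becomes:

git checkout main
git merge --no-ff layout-creation   # always create a merge commit, even when a fast-forward is possible
git push origin main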
|
I am new to the git, and started a simple project, just to learn about branches and commits.
My problem is with the github network graph tool.Here is the log:Initial commit to mainOther commit adding stuff to the main branchCreating a second branch (layout-creation)Commited stuff to that branchPushed to the remote usinggit push --set-upstream origin layout-creationAfter that, my new goal was to merge the main branch with the layout-creation branch. For that, I used these command lines:git checkout maingit merge layout-creationgit push origin mainI expected the graph to look likethisbut it looks likethisWhat do I have to do in order to achieve the first graph (with command lines)?
|
Network graph from github
|
My approach is the following (for forked repositories):
git remote add upstream {{upstream-url}}   # Point to the original repo
git fetch upstream                         # Fetch its branches
git merge --no-commit upstream/master      # Merge upstream changes with no auto commit (use upstream/main if that is the default branch)
# Review changes...
git commit                                 # Commit the merge (no message required; the default merge message is used)
|
My goal is to clone or fork a project to make modifications to it, but I would like to keep benefiting from updates made to the original (be it from the master branch or others) and merge them into my own private repo.
Making a fork forces the repo to be public, which is not an option.
What is the best course of action to follow? What about merging updates from other forks of the same project?
Right now what I have done is download the source code and set up a new repo with it. This of course means that any time I want to apply updates I have to download the updated files and overwrite my files with those, which is not ideal because if I modify the same files in my repo I have to make the same modifications to every file I want to update. There must be a way to do this, but after hours of googling I haven't found what I'm looking for.
|
Merge original repo updates into a private cloned/forked repo [duplicate]
|
There are several ways to troubleshoot this:

Check permissions, webhooks and the kube controller. Details can be found here.
Check that your firewall rule is not blocking the connection (on the proper port).
Prometheus needs read access to all cluster components in order to get the metrics. Check the cluster roles.
Check the service endpoint discovery configuration in the config map.
Make sure you are using the latest stable version of Prometheus.

Please let me know if that helped.
|
I was using Prometheus for monitoring pods' CPU and network usage, but metrics like cpu_usage_seconds are not coming into Prometheus. When I checked, the kubelet targets are down. I'm using stable/prometheus-operator from Helm.
|
Prometheus targets showing 403 for kubelet
|
The following works for me:

git clone https://android.googlesource.com/platform/packages/apps/DeskClock/

This will download the entire repository. Then you can check out any branch you want.
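Concretely, for the ref named in the question's URL (assuming android-4.3_r1 exists as a tag or branch in that repository):

git clone https://android.googlesource.com/platform/packages/apps/DeskClock/
cd DeskClock
git checkout android-4.3_r1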
|
Essentially I am trying to clone this Android open source project to my desktop: https://android.googlesource.com/platform/packages/apps/DeskClock/+/android-4.3_r1
I am not sure what exactly to do. I have tried:

git clone https://android.googlesource.com/platform/packages/apps/DeskClock/+/android-4.3_r1

But I got the error:

fatal: remote error: Git repository not found

So how would I have to create a repository and clone it in there? I am just completely unsure how to do this locally on my desktop.
|
How to clone an Android open source project to my desktop
|
It would be nice if you could give us a little bit more information about the execution environment first.
The issue most likely appears due to the default heap size limit in V8.
Try reading this: https://github.com/exceljs/exceljs/issues/2041
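If the workbook genuinely fits in memory and you just need more headroom, one common workaround (an assumption about your setup, not an exceljs-specific fix) is to raise the V8 heap limit when starting Node:

node --max-old-space-size=4096 app.js   # allow roughly 4 GB of old-space heap; app.js is a placeholder entry point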
|
|
I'm trying to use the writeBuffer method but it is giving me a "Reached heap limit Allocation failed" error. It works on small Excel files but gives an error on big ones. Is there any solution you know of? I'm using exceljs.
const fileAsBuffer = await workbook.xlsx.writeBuffer(); // giving out of memory error
|
exceljs giving out of memory in writeBuffer method
|
Forgot to edit this, as I found the true issue going on (Andy Shinn was correct that it was not a configuration issue).
The actual problem was not any of my Docker containers or even anything in the Digital Ocean server itself, but rather an issue with Cloudflare. Cloudflare does not yet support WebSockets, so any domains that make use of them have to be grey-clouded in the Cloudflare DNS panel.
Reference
|
I have a Digital Ocean server running Ubuntu 14.04, and two web applications running through Docker containers. One is a Ghost container, the other is a Jupyter container (https://hub.docker.com/r/jupyter/notebook/). I'm also running an nginx-proxy container (https://github.com/jwilder/nginx-proxy).
The issue is that websockets aren't working, and Jupyter requires them to be enabled to work. I have Jupyter served at http://notes.rooday.com/, and accessing it works, but it can't connect to the ipython kernel due to the disabled websockets. I tried researching how to fix this, and the closest I got was this nginx config file https://paste.ubuntu.com/5620850/.
However, I'm not sure how to apply that config file to the nginx-proxy container, especially in a way that will not interfere with my Ghost container which is also behind the nginx-proxy (at http://blog.rooday.com/).
Can someone point me in the right direction?
|
How to allow websockets to specific subdomain behind an nginx proxy?
|
Try this:

location / {
    # This is cool because no php is touched for static content.
    # include the "?$args" part so non-default permalinks doesn't break when using query string
    try_files $uri $uri/ /index.php?$is_args$args =404;
}

if (!-e $request_filename) {
    rewrite ^.*$ /index.php last;
}

Having had a lot of failures configuring WordPress to work with nginx, the rewrite rule solves every issue with the 404 errors.
|
|
Although this issue has been answered many times, it is still not working for me.
I am getting 404 on all pages except the home page on nginx. I am posting my configuration below:

server {
    listen 80;
    listen [::]:80;

    root /var/www/html/p/swear;
    index index.php index.html index.htm;
    server_name skinnybikiniswimwear.org;

    location / {
        try_files $uri /$uri/ /index.php?args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        try_files $uri /index.php?args =404;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

I am not able to find the issue in this configuration. WordPress is installed at /var/www/html/p/swear.
Thanks
|
Nginx with wordpress 404 on all pages
|
As mentioned by @camobap, the reason for the OutOfMemory was that the PermGen size was set very low. Now the issue is resolved.
Thank you all for the answers and comments.
|
I am running an application using NetBeans and in the project properties I have set the Max JVM heap space to 1 GiB.
But still, the application crashes with Out of Memory.
Does the JVM have memory stored in system? If so how to clear that memory?
|
Does JVM store memory in system ? If so, how to clear it?
|
Your time column (e.g. created_at) should be of TIMESTAMP WITH TIME ZONE type.*
Use a time condition; Grafana has a macro for this, so it is easy, e.g. WHERE $__timeFilter(created_at)
You want hourly grouping, so you need to write the select for that. Again, Grafana has a macro: $__timeGroupAlias(created_at,1h,0)
So the final Grafana SQL query (not tested, so it may need some minor tweaks):

SELECT
  $__timeGroupAlias(created_at,1h,0),
  count(*) AS value,
  'succcess_total_count_per_hour' as metric
FROM logs
WHERE
  $__timeFilter(created_at)
  AND http_status = 200
GROUP BY 1
ORDER BY 1

* See the Grafana docs: https://grafana.com/docs/grafana/latest/datasources/postgres/ The macros are documented there; there are also macros for the case when your time column is a UNIX timestamp.
|
I have a PostgreSQL data source with the following table. It's basically a log. All I want is to show on a chart how many successful records (with http_status == 200) I have per hour. Sounds simple, right? I wrote this query:

SELECT
  count(http_status) AS "suuccess_total_count_per_hour",
  date_trunc('hour', created_at) "log_date"
FROM logs
WHERE
  http_status = 200
GROUP BY log_date
ORDER BY log_date

It gives me the following result, which looks good to me. I go ahead and try to put it into Grafana. OK, I get it, I have to help Grafana understand which field to use for the time. I go to the Query Builder and I see that it breaks my query completely, and since that moment I have been completely lost. How do I explain to Grafana what I want? I just want a simple chart. Sorry for the rough picture, but I think you get the idea. Thanks for any help.
|
I can't show to Grafana what time field it should use for chart building
|
That is cached on the client via headers in the response, so you can't "clear" it. As a workaround, you can first set a suitable max age for the response cache on the client side, then use VaryByHeader or VaryByQueryKeys; each time you want to refresh the cache you provide a different value for your header/query string:
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/middleware?view=aspnetcore-3.1
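A minimal sketch of the VaryByQueryKeys idea applied to the action from the question (the "v" query key, the duration and the view name are assumptions; VaryByQueryKeys also requires app.UseResponseCaching() in the pipeline):

[Route("SomeData")]
[ResponseCache(Duration = 3600, VaryByQueryKeys = new[] { "v" })]
public IActionResult SomeData()
{
    // Returns the partial view for the ajax call; when the client needs fresh data
    // it requests /SomeData?v=<new value>, which is cached as a separate entry.
    return PartialView("_SomeData");
}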
|
I have a controller action which renders a partial view which fetches some data from the database asynchronously. Let's say these are menu items.
[Route("SomeData")]
[ResponseCache(Duration = 1000 * 60 * 60)]
public IActionResult SomeData()
{
//returns a partial view for my ajax call
}
The data does not change often, but a user might do something and know that it should result in a change in that partial view, i.e. new menu items should appear.
However, with a cached response, the data is not loaded from DB. I would like to add a 'refresh' button on the page so that a user can explicitly clear all cache.
I tried javascript to do window.reload(true); as well as this reply https://stackoverflow.com/a/55327928/2892378, but in both cases it does not work.
I need the behaviour identical to clicking Ctrl + Refresh button in Chrome.
Cheers
|
ASP NET Core - clear ResponseCache programmatically
|
While changing the base branch of an existing PR is supported, changing the actual upstream repository is not, for now (Q1 2022).
I would:
make a new fork of the target upstream repository
change origin of my local repository to that new fork and push my PR branches to it (see the sketch below)
make new PRs from that new fork: this time, the base repository is the right one every time
possibly push the same feature branch to my old fork, which remains tied to the old upstream repository
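A sketch of the "change origin" step, with placeholder URL and branch names:

# point the existing local clone at the new fork
git remote set-url origin git@github.com:me/new-fork.git

# push the feature branch there, then open the PR from the new fork
git push -u origin my-feature-branch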
I'm not sure this addresses the concern that when a new PR is opened it defaults to the upstream as the base instead of the forked repo. It's not so much that we want the upstream to be changed, just the default behavior for new PRs regarding which base it compares against.
– Chase
@Chase But the base repository will be the right one with this new setup, right?
– VonC
While it somewhat works, I guess the real answer is still just: no, you can't change the default base repo for new pull requests that you open. You will always need to change the base before submitting it.
– Chase
@Chase Indeed. I hope GitHub would consider this use case. Maybe it is tricky to implement though.
– VonC
|
|
I have a fork of an old, not-very-well-supported repository. In the fork, whenever I create a pull request from feature branch to master (i.e. the default branch), I have to specify base repository manually, every single time:
There's a similar issue with BitBucket; it has a good description, but the answers are out of scope for this one.
Can I change this behavior of GitHub UI somehow, so that new pull requests are created against specific repository?
I assume that I can achieve this by de-forking the repo, but I'd like to keep the fork relation a) due to historical reasons, b) out of respect to the original author, and c) because this is a highly error-prone process for now (there's no single button for that, unfortunately)
|
Can I set default repository for pull requests from fork?
|
It depends on what is installed by default.
In our Solaris/Linux/Windows environment, we are using Perl scripts, but not one per OS: only one script able to recognize the OS in which it is executed and to run the appropriate code depending on the platform.
That allows isolating the common functions in a platform-independent part of the script, while dealing with the paths and other platform-specific commands in OS-dedicated subroutines, all in the same script, versioned once.
The key is not to introduce OS branches, which would cause a kind of "metadata leak": some information (the OS) which has nothing to do with the data being versioned (in branches "test" or "fix" or ...) would coexist in special branches.
That is not practical: what if, in the branch "fix", you need a special version of those scripts? Would you then make a "fix-Windows" and "fix-Unix" branch just for them, or would you rather simply modify said script in the "fix" branch, commit it and be done with it?
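For reference, a minimal sketch of how such a single cross-platform filter script gets wired up (the attribute pattern, filter name and script path are placeholders, not taken from the question):

# .gitattributes
*.cfg filter=platformfix

# run once per clone; the same Perl script handles both directions and detects the OS itself
git config filter.platformfix.clean  "perl scripts/platformfix.pl clean"
git config filter.platformfix.smudge "perl scripts/platformfix.pl smudge"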
|
I'm part of a fairly large organization with developers distributed geographically and using a mix of Windows, OSX, and Linux development environments. I've asked a previous question that leads me to want to use clean/smudge filters: Mark a file in the GIT repo as temporarily ignored. But... what's the best way to do cross-platform filter scripts? I'd prefer not to require developers to install extra scripting environments. Are there any best practices around this? Any way to make the filters run on the server side (we use GitHub)?
|
GIT - Cross platform smudge/clean filters?
|
This is precisely the point of the (commercial) Portfolio Management (Views) plugin. However, I don't know of free (as in beer) alternatives.
|
I have a question about merging reports produced by Sonar. I have a multi-module project and, due to its complexity, I want to produce a Sonar report (not only a coverage report using JaCoCo) for each module. After that I would like to merge all the reports (maybe in the parent directory or even outside the project) to see all the statistics in an integrated Sonar report. Can anyone help me with this integration?
I know that it is possible using Ant and JaCoCo reports, but I want more than coverage reports; that's why I need to merge Sonar reports and not only JaCoCo reports... Thanks in advance. Tiago
|
Merge Maven Sonar reports
|
You could use a php file to serve those images and do some checks before serving them. I would try something like this:
<?php
if ( /* YOUR CHECK HERE */ ) {          // e.g. verify that the user is logged in
    header('Content-Type: image/jpeg'); // Or whatever your content type might be
    readfile('/path/to/file');          // e.g. a validated path built from $_GET['path']
}

You could then use a rewrite rule to make those calls to your PHP file look like real images:

rewrite /img/users/pictures/(.*) /your_php_file.php?path=$1 break;

Or something like that.
This may be secure but not very efficient, because your server has to access two files: the PHP file and the image file.
|
Hello web development gurus
What is the best practice for storing and serving images securely without hurting performance?
Is it possible to store user images in a folder that's not web accessible (possibly higher up and before /www?) and serve on demand after the user has logged in to the page? There is a username and password access mechanism already in place.
The users do not want these images to be publicly accessible.
I am running nginx with php on Ubuntu. Database is mysql.
Thank you!
|
Storing/serving web-site images securely and efficiently [closed]
|
You will need to do as much work with the data on the database side as you can. Then, once you have the data, try to write it out as you are reading it from the database, or at least buffer it in small chunks, so that you aren't loading all the data into the Java program at once.
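A minimal sketch of that streaming approach on the server side (the connection string, table, column and file names are placeholders; exact fetch-size behaviour depends on the JDBC driver):

import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReportExport {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:yourdb://host/db";   // placeholder connection string
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement("SELECT col_a, col_b FROM big_table")) {
            ps.setFetchSize(1000);              // hint the driver to stream rows in batches
            try (ResultSet rs = ps.executeQuery();
                 BufferedWriter out = Files.newBufferedWriter(Paths.get("report.csv"))) {
                while (rs.next()) {
                    // write each row out immediately instead of collecting objects in memory
                    out.write(rs.getString("col_a") + "," + rs.getString("col_b"));
                    out.newLine();
                }
            }
        }
    }
}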
|
I have a requirement where, in one of the reports, I need to fetch around 10 million records from the database and transfer them to Excel.
The application is a client-server model where the server-side logic is written in EJB and the client is written in Swing.
Now my question is: when I try to fill the Java objects from the ResultSet, if the ResultSet is large (> 100000 rows) then it throws an Out of Memory Error on the Java side.
Can someone let me know how this scenario should be handled in Java? I need to transfer all the records from the server to the client, and then I need to build the Excel report based on the data retrieved from the server side.
|
Java - Huge Data Retrieval
|
You can just use a GitHub pull request to do this! It uses --no-ff by default if you didn't change the config.
The following text is from the GitHub docs:
When you click the default Merge pull request option on a pull request on GitHub, all commits from the feature branch are added to the base branch in a merge commit. The pull request is merged using the --no-ff option.
If you want to always merge branches with --no-ff, you can disable the other merge methods (squash and rebase) in your repo settings. See the docs about squash merging and how to configure it.
|
I'm wondering if there is an equivalent to performing the following
to do a non-fast-forward merge into the current branch for preserving my branch topology:
git merge --no-ff <some-branch>
...without using the git CLI or desktop apps, purely within the GitHub web interface?
How does one perform this for PRs made on the web interface?
Any pointers or clarifications would be much appreciated.
|
Using GitHub website for non-fast-forward merging of PRs, instead of Git CLI
|
Took a quick look and it seems to be a case of inadequate documentation (none?), that hopefully gets remedied as the project matures. At a very quick glance at the code, it seems to be a standard app (jobsapp tree). So you might start playing with it by creating a Django virtual environment containing its requirements, putting the jobsapp tree into the relevant place (where startapp name would put name) and including its urls into that environment's master urls.py file. Also jobsapp (or jobs?) in INSTALLED_APPS in the master settings.py
Then, makemigrations and migrate, to create its DB tables. If makemigrations crashes, I've clearly forgotten something.
Note, its urls.py identifies itself as jobs, not jobsapp. Not sure of the implications.
Then either fire up runserver and point a web-browser at it, or fire up its tests. If tests fail or the test server crashes, you'll have to work out why. You will soon know if it is useful to yourself, or for whatever reason not functioning well enough to spend more of your time time on.
Reading the installation instructions for a well-known widely-used third party app like django-filters may be helpful. Modulo some names, installation instructions will be pretty much the same for any well-written app (i.e. one that sticks to the conventions).
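For reference, the usual bootstrap sequence this answer alludes to looks roughly like the following sketch (the virtualenv name and the presence of a requirements.txt are assumptions about the project):

python -m venv venv
source venv/bin/activate          # venv\Scripts\activate on Windows
pip install -r requirements.txt
python manage.py makemigrations
python manage.py migrate
python manage.py runserver        # then point a browser at http://127.0.0.1:8000/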
|
I want to run a specific project after downloading it from GitHub. I read all the instructions too, but I didn't understand how to run it with my existing interpreter. I have installed all the required packages as well. Please help, and please don't flag my question as useless.
Django Job Portal
|
How can I run a project after cloning it from GitHub? [closed]
|
Have you tried with ObjectChangeTracking turned off (readonly mode)?
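For reference, the DataContext property behind that suggestion is ObjectTrackingEnabled. A hedged sketch against the code in the question (the context type name and the docName construction are assumptions):

using (var ctx = new DocumentsDataContext())          // placeholder context type
{
    ctx.ObjectTrackingEnabled = false;                // read-only mode: entities are not tracked/cached by the context
    foreach (var c in ctx.Documents)
    {
        var docName = System.IO.Path.Combine(@"C:\export", c.Id + ".pdf");   // placeholder naming scheme
        System.IO.File.WriteAllBytes(docName, c.Report.ToArray());
    }
}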
|
Background
OK, so I've got a simple LINQ-to-SQL DataContext with one table, containing about 900 MB worth of PDF documents in a VARBINARY field, along with some other identifiers. DeferredLoadingEnabled is set to true. The point of the code is to export all the documents to PDF files on our server. This isn't the first time I've done bulk "script"-like stuff using LINQ-to-SQL; it's a great tool for simply iterating over many records.

Problem
My problem is that after approx. 1400 iterations of my foreach (var c in ctx.Documents), which takes the Report field and uses File.WriteAllBytes(docName, c.Report.ToArray()); to write it to disk, I get an OutOfMemoryException. As this is an internal piece of code, I simply used a .Skip(1426) on my selection and it finished successfully. Needless to say, when observing my program crash, I had indeed run out of memory. Are there any good ways to avoid this in the future, or is LINQ-to-SQL bound by this restriction?

One possible answer I can think of is to set an iteration limit and re-instantiate my DataContext every 500 records or so. Doesn't sound very neat though...
|
Releasing LINQ-to-SQL resources to avoid OutOfMemoryException
|
Do not use the following reserved names for the name of a file:CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8,
COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also
avoid these names followed immediately by an extension; for example,
NUL.txt is not recommended.
https://learn.microsoft.com/en-us/windows/win32/fileio/naming-a-file
Git expectedly has many issues with files like aux.js, con.cpp, etc. on Windows. Your best bet is to rename this file in the repo.
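A sketch of that rename, which has to be done from a clone on an OS that can actually create the file (Linux/macOS, or e.g. WSL), since Windows cannot check out con.js; the new file name is a placeholder:

git mv con.js con-page.js
git commit -m "Rename con.js: CON is a reserved device name on Windows"
git push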
|
I cloned a project and I can see using the Web UI that there are 3 files. After cloning, I noticed I only got 2 files. I checked the branch; I'm on master and I should have 3 files. Here is the output of git status:

On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        deleted:    con.js

no changes added to commit (use "git add" and/or "git commit -a")

But I didn't delete any file! git ls-files --deleted gives: con.js
OK, weird bug. I then try to restore the file that strangely got deleted while cloning, so I type git checkout con.js. After I type git checkout con.js, the full file gets printed in the terminal. I suspect it gets deleted again immediately.
If I then type git ls-files --deleted, con.js is still there and not restored! What could I possibly be doing wrong?
|
git file getting deleted again and again
|
I believe what's happening is that the first time you deploy your app, Auto Scaling picks one instance to be the leader and the new cron job is created on that instance. The next time you deploy your app, Auto Scaling picks another instance to be the leader, so you end up with the same cron job on two instances. So the basic test would be to ssh to all instances and check their crontab contents with crontab -l.

You can avoid duplicate cron jobs by removing old cron jobs on every instance, regardless of whether it is the leader or not:

container_commands:
  00_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  01_some_cron_job:
    command: "echo '*/5 * * * * wget -O - -q -t 1 http://example.com/cronscript/' | crontab"
    leader_only: true

As mentioned in Running Cron In Elastic Beanstalk Auto-Scaling Environment, || exit 0 is mandatory because if there is no crontab on the machine the crontab -r command will return a status code > 0 (an error), and Elastic Beanstalk stops the deploy process if one of the container_commands fails. Although I personally have never experienced a situation where a crontab was not found on an Elastic Beanstalk instance.

You can run /opt/elasticbeanstalk/bin/leader-test.sh to test whether it is a leader instance or not.
Hope it helps.
|
I have one application on Elastic Beanstalk and cron jobs for it. The code setting up the cron is:

container_commands:
  01_some_cron_job:
    command: "echo '*/5 * * * * wget -O - -q -t 1 http://site.com/cronscript/' | crontab"
    leader_only: true

This script calls the mail sender, and I receive two messages each time. The code of http://site.com/cronscript/ looks like this (PHP code):

require_once('ses.php');
$ses = new SimpleEmailService(EMAIL_SHORTKEY, EMAIL_LONGKEY);

$m = new SimpleEmailServiceMessage();
$m->addTo('[email protected]');
$m->setFrom('[email protected]');
$m->setSubject('test message');
$m->setMessageFromString('', 'message content');

$send_emails = ($ses->sendEmail($m));

When I call http://site.com/cronscript/ from the browser's address bar, I receive one message, as I want.
|
elastic beanstalk cron run twice
|
If you want to use the equivalent of the Cache facade, you should inject Illuminate\Cache\Repository instead:

use Illuminate\Cache\Repository as CacheRepository;

// ...

protected $cache;

public function __construct(CacheRepository $cache)
{
    $this->cache = $cache;
}

You can look up the underlying classes of facades in the documentation: Facades - Facade Class Reference

Thank you! I wish the Laravel documentation would expand on how to inject services rather than relying on facades for the more advanced developers. Comes in handy when you're trying to extract modules into packages. – Dylan Pierce

If only they could use the bound class name instead of the string alias (Illuminate\Cache\Repository::class instead of cache.store) in the framework code. Everything would be easier if I could jump into the Facade, see the class referenced there, and inject that. – Moritz Friedrich
|
|
I'd like to get away from using the Cache facade and inject the cache into my controller using the constructor, like this:

use Illuminate\Contracts\Cache\Store;

...

protected $cache;

public function __construct(Store $cache)
{
    $this->cache = $cache;
}

I'm then using an app binding in AppServiceProvider.php:

public function register()
{
    $this->app->bind(
        'Illuminate\Contracts\Cache\Store',
        'Illuminate\Cache\FileStore'
    );
}

However, I get the following error because FileStore.php expects $files and $directory parameters in the constructor:

BindingResolutionException in Container.php line 872:
Unresolvable dependency resolving [Parameter #1 [ $directory ]] in class Illuminate\Cache\FileStore

Any idea how I would get around this?
|
Injecting cache as a dependency in Laravel 5
|
You can try something like this:

{k8s_container_name="SOME_CONTAINER_NAME"} |
label_format custom_label = `
{{ if contains "GET" .httpMethod}} GET URL
{{ else if contains "POST" .httpMethod}} POST URL {{end}}`
|
In Grafana I added an Exclude parameter to the dashboard. If the Exclude field is empty, I would want it to do nothing; otherwise, exclude lines that match the regex in the Exclude field. I would want to write something like:

{label="this"} ( if "$Exclude" != "" then !~ "$Exclude" else <do nothing> fi )

How could I write this in LogQL? I tried reading the documentation, but to no avail. Currently, I just set the default value of Exclude to a unique UUID; however, it would be nice for it to be empty.
|
How to write an IF in LogQL query?
|
Mapping a volume works to make files available to the container, not the other way round. You can fix your issue by running "npm install" as part of the CMD. You can achieve this by having a "startup" script (e.g. start.sh, sketched below) that runs npm install && npm run start. The script should be copied into the container with a normal COPY command and be executable. When you start your container you should see files in the node_modules folder (on the host).
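A minimal sketch of that startup script (the file name and location are assumptions):

#!/bin/sh
# start.sh — install dependencies inside the container at startup, then launch the app
npm install
npm run start

In the Dockerfile the last line would then become something like CMD ["./start.sh"], after COPYing the script and making it executable.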
|
I am trying to run a Node.js app in a Docker container via Docker Compose.
The node_modules should be created in the image and the source code should be synced from the host. Therefore, I use two volumes in docker-compose.yml: one for my project source and the other for the node_modules in the image.
Everything seems to be working. The node_modules are installed and nodemon starts the app. In my Docker container I have a node_modules folder with all dependencies. On my host an empty node_modules is created (I am not sure if this is expected).
But when I change a file in the project, the nodemon process detects a file change and restarts the app. Now the app crashes because it can't find modules. The node_modules folder in the Docker container is empty now.
What am I doing wrong?
My folder structure looks like this:
/
├── docker-compose.yml
├── project/
│ ├── package.json
│   ├── Dockerfile
docker-compose.yml
version: '3'
services:
app:
build: ./project
volumes:
- ./project/:/app/
- /app/node_modules/
project/Dockerfile
# base image
FROM node:9
ENV APP_ROOT /app
# set working directory
RUN mkdir $APP_ROOT
WORKDIR $APP_ROOT
# install and cache app dependencies
COPY package.json $APP_ROOT
COPY package-lock.json $APP_ROOT
RUN npm install
# add app
COPY . $APP_ROOT
# start app
CMD ["npm", "run", "start"]project/package.json...
"scripts": {
"start": "nodemon index.js"
}
...
|
Docker Compose node_modules in container empty after file change
|
I just had to add tty: true to my docker-compose.yml:
version: '2'
services:
ubuntu:
image: ubuntu:16.04
tty: true
Docker version 1.12.5, build 7392c3b
docker-compose version 1.7.1, build 0a9ab35
|
Q. How to run docker-compose in detached mode
I am trying to run docker-compose in detached mode but it exits right after it runs, although I am able to run the same image in detached mode using the 'docker run' command.
Run the image using 'docker run' (works in detached mode):
docker run -itd ubuntu:16.04
Below is the output of 'docker ps -a':
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d84edc987359 ubuntu:16.04 "/bin/bash" 4 seconds ago Up 3 seconds romantic_albattani
Run the same image using 'docker-compose up -d' (didn't work in detached mode). Below is my docker-compose.yml file:
version: '3'
services:
ubuntu:
image: ubuntu:16.04
'docker-compose ps' command output:
Name Command State Ports
----------------------------------------------------
composetesting_ubuntu_1 /bin/bash Exit 0
Update: When using the tty: true parameter in the docker-compose.yml file as below:
version: '3'
services:
ubuntu:
image: ubuntu:16.04
tty: true
then the console will not execute any command; for example, if I type 'ls -l' the console does not respond.
|
Docker compose detached mode not working
|
The javadoc for File.renameTo specifically says that it may not be able to move a file between different volumes, and that you should use Files.move if you need to support this case in a platform-independent way.
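A minimal sketch of the Files.move alternative, reusing the paths from the question (the REPLACE_EXISTING option and the /opt/queue/out target are assumptions):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
Path source = Paths.get("/opt/queue/in/in_file");
Path target = Paths.get("/opt/queue/out/in_file");
// Files.move falls back to copy-and-delete when source and target are on
// different file stores, so it also works across Docker volume mounts.
Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
Unlike renameTo, Files.move throws an IOException describing what went wrong instead of just returning false, which also makes the failure easier to diagnose.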
|
I have two docker containers: producer and consumer.
The consumer container has two volumes:
VOLUME ["/opt/queue/in", "/opt/queue/out"]
docker-compose.yml
consumer:
image: consumer
producer:
image: producer
volumes_from:
- consumer
The producer puts a file in the /opt/queue/in directory and the consumer reads the file from that dir and moves it to /opt/queue/out. The problem is that the consumer is written in Java and the following Java code returns -1 (operation failed):
new File('/opt/queue/in/in_file').renameTo(new File('/opt/queue/in/in_file'));
When I try to move the file from the command line there is no error; the file is moved correctly. Why is this happening? How can I diagnose what the problem is?
|
Docker - cannot move file between volumes from java
|
Typically, for human normal software dev projects you create the repo on GitHub, clone it locally and then save your project files to the local repo. Done. All that is needed now is to do the assorted git add, git commit, git push magic to push your code to the remote repo.
Typically you create a repo wherever, git init your project directory, set the remote to point at your repo, commit, and push. This still works for GameMaker. No moving files around.
But the "GameMaker 2" IDE seems to intentionally want to prevent you from doing this. It is as if the designers sat around thinking about how best to obstruct the user from setting up a project hosted on GitHub. It is as if they don't want you sharing your files with team mates or have the ability to safely store your code online. AAAARG!
Not really, there's even a built-in plugin for git integration (see File > Preferences > Plugins > git) if you wanted those "changed file" badges in the resource tree for some reason.
Anyway, angry ranting aside... I can save my code to the repo I've created, no problem. But when it comes to the "GameMaker 2" IDE opening up a project, it doesn't want to recognize any projects that are not default saved to the GameMaker local directory that's created on the user's computer during initial install and project creation. The same thing happens when I use the export feature to export my project to the local git repo.
Are you referring to Recent Projects list? There's an "Open" button right next to it that lets you point it at what you want to open.
|
So, I'm an ace with git. I've used it with the CLI every single day for years to manage hundreds of software development projects. But now comes the "GameMaker 2" IDE... and it is beyond me, how the hell I'm supposed to integrate it with GitHub?
Typically, for human normal software dev projects you create the repo on GitHub, clone it locally and then save your project files to the local repo. Done. All that is needed now is to do the assorted git add, git commit, git push magic to push your code to the remote repo.
But the "GameMaker 2" IDE seems to intentionally want to prevent you from doing this. It is as if the designers sat around thinking about how best to obstruct the user from setting up a project hosted on GitHub. It is as if they don't want you sharing your files with team mates or have the ability to safely store your code online. AAAARG!
Anyway, angry ranting aside... I can save my code to the repo I've created, no problem. But when it comes to the "GameMaker 2" IDE opening up a project, it doesn't want to recognize any projects that are not default saved to the GameMaker local directory that's created on the user's computer during initial install and project creation. The same thing happens when I use the export feature to export my project to the local git repo.
Is there an invisible file or something that gets nested in a "GameMaker 2" project folder that tells the IDE that this is a real project that can be opened?
Can anyone tell me what files I need to include in a GameMaker 2 project stored in my local git repo so that the "GameMaker 2" IDE can open them? Or tell me how to effectively integrate the IDE with GitHub? This BS is seriously pissing me off. It should not be this difficult to use a standard tool like git with a premium game engine IDE!
Thanks,
Wulf
|
How to save a GameMaker 2 Project to a Git repo?
|
You should be able to use the pull_request event with the ready_for_review or even review_requested activity types.
This example will only run when a pull request is marked ready for review.
on:
pull_request:
types: [ready_for_review]
Draft pull requests
Pull request trigger event
|
Currently, our team has a limited number of GitHub Actions minutes, so I would only like to run GitHub Actions when the WIP flag is not present.
Currently we use this plugin WIP to check if a branch is work in progress.
Is there a way that if the commit is flagged as WIP, that the GitHub actions to not trigger so we can conserve our monthly minutes allowance?
|
How to stop GitHub actions starting if a GitHub check has failed?
|
Your issue is likely that you are using external DNS which routes your request to your public IP and then back to your website. Setup internal DNS and point the site resolution to the internal IP directly.
Then as you stated, you can do the following:
cat << 'EOF' >/etc/nginx/private.conf
allow 192.168.1.0/24;
deny all;
EOF
site.conf:
include /etc/nginx/private.conf;
|
I have a single physical server running several server blocks in nginx corresponding to different subdomains. One of them I'd like to be only accessible from devices on the same local network as the server. I know theoretically this can be done with
allow 192.168.1.0/24;
deny all;
within a location block. When I actually try to access the server from a local device, though, the request is denied. Looking at the access logs, this is because the request is shown as coming from my network's external IP rather than the device's internal IP. How can I fix this?
|
Allowing only local network access in NGINX
|
If your /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is loading environment from /etc/sysconfig/kubelet, as mine does, you can update it to include your extra args:
# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--root-dir=/data/k8s/kubelet
The entire 10-kubeadm.conf, for reference:
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
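After editing the file, you would typically reload systemd and restart the kubelet for the new data directory to take effect (a sketch; the service name assumes a standard kubeadm install):
sudo systemctl daemon-reload
sudo systemctl restart kubelet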
|
Kubernetes 1.7.x: kubelet stores some data in /var/lib/kubelet. How can I change it to somewhere else? Because my /var is very small.
|
how to change kubelet working dir to somewhere else
|
When it says that, just open the shell and do git status. That will give you a decent idea of what could be wrong and the state of your repo.
I can't give you a specific error for this, as it happens for many reasons in GitHub for Windows, like, say, some problem in updating submodules, etc.
|
I am using GitHub Windows 1.0.38.1 and when I click the 'Sync' button after committing, I get the error.
How do I debug this problem? If in the shell, what should I do?
The sync works fine if I do a git push or git pull, but the next time I want to sync using GitHub Windows, I get the same error.
|
Github Windows 'Failed to sync this branch'
|
You might want to use the regex stage:
- job_name: my-job
pipeline_stages:
- regex:
# extracts only log_level from the log line
expression: '\s+(?P<log_level>\D+)\s.*'
- labels:
# sources extracted log_level as label 'level' value
level: log_level
The expression above matches only log_level, but you may add more named capture groups and use them in the same way, e.g.:
^(?P<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3})\s+(?P<log_level>\S+)\s(?P<rest>.*)$
or, less strict:
(?P<time>\S+\s\S+)\s+(?P<log_level>\D+)\s(?P<rest>.*)
matches time, log_level and the rest of the line and extracts them for later use.
Check the regex101 playground.
|
I’m using grafana loki to compose dashboards.
I need to group the logs by level to create the graph, but in the details of the logs I cannot see the level label. My logs are like this:
2021-05-31 14:23:00.005 INFO 1 --- [ scheduling-1] AssociationService : Scheduler Association finish at 31-05-2021 02:23:00
Is there a way to extract the level and associate it to the label "level"?
|
How to add the level tag in Promtail config
|
The reason you get a 500 error is because the first rule that you apply is blindly adding a .php extension to whatever isn't a file. So /user-projects/1/ matches the first rule and gets a .php extension tacked onto the end, and then the same thing happens again, and again.
You should either swap the order of the two rules, or make your php extension rule more precise:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} ^/(.+?)/?$
RewriteCond %{DOCUMENT_ROOT}/%1.php -f
RewriteRule ^(.+?)/?$ $1.php [L]
That checks first that, if you add a .php to the end, it actually points to a file that exists.
|
I have used the following code in .htaccess:
Options +FollowSymlinks
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+?)/?$ $1.php [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^user-projects/([0-9]+)/?$ user-projects.php?uid=$1 [L,QSA]
The above code works for the URL mydomain.com/pagename, but if the URL is like mydomain.com/user-projects/1 it gives a '500 internal server error'.
Can anybody tell me where I am going wrong here?
Thanks.
|
How to set rewrite rule in .htaccess for the 'pagename/id'?
|
The easiest way to do this is with Microsoft's sp_help_revlogin, a stored procedure that scripts all SQL Server logins, defaults and passwords, and keeps the same SIDs.
You can find it in this knowledge base article: http://support.microsoft.com/kb/918992
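A sketch of how it is typically used once the procedures from that article have been created on the source server (the exact output depends on the script version in the KB article):
-- run on the source server, in the master database
EXEC sp_help_revlogin
-- copy the generated CREATE LOGIN statements and run them on the new server;
-- they carry the original password hashes and SIDs, so restored database users
-- map back to their logins without becoming orphaned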
|
I have backed up and restored an MS SQL Server 2005 database to a new server.
What is the best way of recreating the logins, the users, and the user permissions?
In SQL Server 2000's Enterprise Manager I was able to script the logins, the users and the user permissions all separately. I could then run one after the other, and the only remaining manual step was to set the login password (which does not script, for security reasons).
This does not seem possible in SQL Server 2005's Management Studio, making everything very fiddly and time consuming. (I end up having to script the whole database, delete all logins and users from the new database, run the script, and then trawl through a mixture of error messages to see what worked and what didn't.)
Does anyone have any experience and recommendations on this?
|
Restoring a Backup to a different Server - User Permissions
|
If you want, you can use git subtree instead of git submodule. This is a little bit more convenient to use, and doesn't require people who check out from your repository to know anything about submodules or subtrees. It also makes it easier to maintain your own patches to the subproject until you're ready to submit them upstream.
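A minimal sketch of how that might look; the prefix path, remote URL and branch name are placeholders:
# pull the other user's code into library/libraryname as part of your own history
git subtree add --prefix=library/libraryname https://github.com/otheruser/theirlib.git master --squash
# later, fetch their updates into the same folder
git subtree pull --prefix=library/libraryname https://github.com/otheruser/theirlib.git master --squash
The --squash option keeps their entire history from being merged into yours, which is usually what you want for a vendored library.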
|
I'm using github as a repo for a little project, but I'd also like to use some code written by another github user.Is it possible to setup a /library/libraryname folder inside my project which maintains it's links back to the other users repo as well as being part of my projects commits?For example: If the other user updates their code later on, I'd like to be able to easily fetch their changes while still keeping it in the same repo as my main project.
|
Multiple git repo in one project
|
You could simply create a new fork: a fork of the new official repo, and then:
carry your commits over from your old fork to the new fork (in a dedicated branch),
rebase that branch on top of the new fork's master branch,
then make a PR from the new fork's dedicated branch to the new official repo.
The idea of the rebase step is to make sure your PR will apply easily on top of the most recent commit of the master branch of the new official repo. A command-line sketch of those steps follows below.
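A rough sketch, under the assumption that the default branch is master; URLs, branch names and remote names are placeholders:
# start from a clone of your fork of the new official repo
git clone https://github.com/you/new-fork.git
cd new-fork
# bring your old work in as a dedicated branch
git remote add oldfork https://github.com/you/old-fork.git
git fetch oldfork
git checkout -b my-changes oldfork/my-branch
# rebase onto the new fork's master, then push and open the PR from my-changes
git rebase master
git push origin my-changes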
|
See the related question here.
The scenario described there is that he wanted to take ownership of a project and convert his forked repo into a "normal" node, and the answer suggested that this can only be done by requesting GitHub support.
The further problem here is: I forked from the project's original fork master (since that one is the "official" one), and later I was notified that a new guy now owns the project and I need to switch to his repo rather than my original one. So what should I do?
I know I can simply add more "remote"s in git, but my pull requests will need to be sent to the new repo, not the old one, so this is not a solution.
|
Github switching fork parent
|
I had the same issue; for some reason Docker used the node_modules folder from the project instead of its own (created with the RUN npm install command).
I've solved it by adding a .dockerignore file and ignoring the project's node_modules:
//.dockerignore
node_modules/*
|
I am dockerizing a Vite app with Vue. When I run yarn dev from my system, everything is OK, but when I launch the same command from my dockerfile, I get the following error:
yarn run v1.22.5
warning package.json: No license field
$ vite
failed to load config from /app/vite.config.ts
error when starting dev server:
Error: spawn Unknown system error -8
My dockerfile is:
FROM node:14.16.0-alpine3.13
WORKDIR /app
COPY . .
CMD ["yarn", "dev"]And my docker-compose.yml isversion: '3.8'
services:
client:
build:
context: ./dockerfiles
dockerfile: client.dockerfile
volumes:
- ./client:/app
ports:
- '3000:3000'
My folder structure is:
client
|-public
|-src
|-node_modules
|-package.json
|-vite.config.ts
|- ... rest of files
dockerfiles
|-client.dockerfile
docker-compose.yml (at root level)
|
Docker-compose on Vite
|
The document here explains it very clearly: https://www.kernel.org/doc/Documentation/arm64/memory.txt
Translation table lookup with 4KB pages:
+--------+--------+--------+--------+--------+--------+--------+--------+
|63 56|55 48|47 40|39 32|31 24|23 16|15 8|7 0|
+--------+--------+--------+--------+--------+--------+--------+--------+
| | | | | |
| | | | | v
| | | | | [11:0] in-page offset
| | | | +-> [20:12] L3 index
| | | +-----------> [29:21] L2 index
| | +---------------------> [38:30] L1 index
| +-------------------------------> [47:39] L0 index
+-------------------------------------------------> [63] TTBR0/1
L0 - PGD, L1 - PUD, L2 - PMD, L3 - PTE
AArch64 uses only bits 0-39 here (3-level paging). Hence, for AArch64 systems, PGD (L0) = PUD (L1) = [38:30]. The rest of the mapping remains the same.
|
What are the pgd, pmd, pte and page shift bits in a 64-bit virtual address on an ARMv8 CPU with 4-level paging? I need this information to debug an issue at hand.
|
Page table bits in linux virtual address (4-level paging)
|
Your pipeline caches only yarn's cache, not node_modules. The Jest binary is supposed to be in node_modules, so it (along with the other deps) doesn't get restored from cache. This is in line with the actions/cache guidelines, which suggest caching yarn's cache and then doing yarn install.
actions/setup-node can already handle yarn caching, no need to roll your own logic for that.
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: 12.x
cache: yarn
- name: Install dependencies
run: yarn install --frozen-lockfile
- run: yarn run test
If you really want to cache node_modules instead of yarn's cache, then cache the directory manually
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: 12.x
- uses: actions/cache@v2
id: yarn-cache
with:
path: node_modules
key: ${{ runner.os }}-node_modules-${{ hashFiles('**/yarn.lock') }}
- name: Install dependencies
if: steps.yarn-cache.outputs.cache-hit != 'true'
run: yarn install --frozen-lockfile
- run: yarn run test
|
Using GitHub Actions to publish an npm package works and runs the Jest test cases without errors. So I decided to add a yarn cache to optimize build time; the cache process works, but Jest fails with the error below.
$ jest --config=jest.config.js
/bin/sh: 1: jest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
##[error]Process completed with exit code 127.
Here is my yml
name: NPM Publish
on:
push:
branches:
- master
jobs:
build:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v1
with:
node-version: 12.x
- name: Get yarn cache directory
id: yarn-cache-dir-path
run: echo "::set-output name=dir::$(yarn cache dir)"
- uses: actions/cache@v1
id: yarn-cache
with:
path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
restore-keys: |
${{ runner.os }}-yarn-
- name: Install dependencies
if: steps.yarn-cache.outputs.cache-hit != 'true'
run: yarn install --frozen-lockfile
- name: Test cases
run: yarn run pdn:test
|
Github actions - /bin/sh: 1: jest: not found
|
Change the error code:
<HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
S3 doesn't generate a 404 unless the requesting user is allowed to list the bucket. Instead, it generates a 403 ("Forbidden"), because you're not allowed to know whether the object exists or not. In this case, that's the anonymous user, and you probably don't want to allow anonymous users to list the entire contents of your bucket.
If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
Source: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
See also: Amazon S3 Redirect rule - GET data is missing
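Applied to the routing rule shown in the question below, only the error code in the Condition changes; the rest stays the same:
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>www.mydomain.com</HostName>
      <ReplaceKeyPrefixWith>error.php#!/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>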
|
I am trying to get an S3 bucket, when it encounters a 404, to redirect to my own server rather than throwing up a 404 page, so I can then do something with the error.
This is what I have cobbled together. What I think it should do is go to mydomain.com, hit error.php, and let the PHP script work out the filename the user was trying to access on S3.
I would like this to happen no matter what directory the request comes from. When I have an error document defined in website hosting, the 404 page shows up, and when I don't have a 404 page defined I get an access denied XML error.
This is my current redirection rule:
<RoutingRules>
<RoutingRule>
<Condition>
<HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
<HostName>www.mydomain.com</HostName>
<ReplaceKeyPrefixWith>error.php#!/</ReplaceKeyPrefixWith>
</Redirect>
</RoutingRule>
</RoutingRules>
Can anyone give me a hint as to what I am missing, please?
|
Amazon S3 redirect 404 to different host
|
Method 1
byte[] data = new byte[8192];
Random rng = new Random();
using (FileStream stream = File.OpenWrite(filePath))
{
for (int i = 0; i < fileSizeMb * 128; i++)
{
rng.NextBytes(data);
stream.Write(data, 0, data.Length);
stream.Flush(); // BEETLE JUICE: added line -- flush the stream after each block
}
}
Method 2
const int blockSize = 1024 * 8;
const int blocksPerMb = (1024 * 1024) / blockSize;
int count = fileSizeMb * blocksPerMb;
byte[] data = new byte[blockSize];
Random rng = new Random();
using (StreamWriter sw1 = new StreamWriter(filePath))
{
// There
for (int i = 0; i < count; i++)
{
rng.NextBytes(data);
sw1.BaseStream.Write(data, 0, data.Length);
sw1.BaseStream.Flush(); // BEETLE JUICE: added line -- flush after each block
}
}
Reading
Do not read the whole file into memory, just read 4096 bytes at a time. Sample code at http://www.csharp-examples.net/filestream-read-file/
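A sketch of reading in chunks instead of ReadToEnd, assuming the same filePath variable and the System.IO usings already present in the question's code:
byte[] buffer = new byte[4096];
using (FileStream stream = File.OpenRead(filePath))
{
    int bytesRead;
    // read 4096 bytes at a time; only this small buffer ever lives in memory
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // process the bytesRead bytes in buffer here
        // (e.g. accumulate a byte count for the disk-speed measurement)
    }
}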
|
I am attempting to write and then read a large random file to calculate disk speed. I have tried several algorithms but keep getting an out of memory exception when attempting to write a 1GB file. Here are a few I have tried.
Method 1
byte[] data = new byte[8192];
Random rng = new Random();
using (FileStream stream = File.OpenWrite(filePath))
{
for (int i = 0; i < fileSizeMb * 128; i++)
{
rng.NextBytes(data);
stream.Write(data, 0, data.Length);
}
}
Method 2
const int blockSize = 1024 * 8;
const int blocksPerMb = (1024 * 1024) / blockSize;
int count = fileSizeMb * blocksPerMb;
byte[] data = new byte[blockSize];
Random rng = new Random();
//using (FileStream stream = File.OpenWrite(filePath))
using (StreamWriter sw1 = new StreamWriter(filePath))
{
// There
for (int i = 0; i < count; i++)
{
rng.NextBytes(data);
sw1.BaseStream.Write(data, 0, data.Length);
//stream.Write(data, 0, data.Length);
}
}
Reading
using (StreamReader sr = new StreamReader(filePath))
{
String line = sr.ReadToEnd();
}
|
Writing Large File To Disk Out Of Memory Exception
|
An image only has one ENTRYPOINT (and one CMD). In the situation you describe, your new entrypoint needs to explicitly call the old one.
#!/bin/sh
# new-entrypoint.sh
# modify some files in the container
sed -e 's/PLACEHOLDER/value/g' /etc/config.tmpl > /etc/config
# run the original entrypoint; make sure to pass the CMD along
exec original-entrypoint.sh "$@"
Remember that setting ENTRYPOINT in a derived Dockerfile also resets CMD so you'll have to restate this.
ENTRYPOINT ["new-entrypoint.sh"]
CMD original command from base image
It's also worth double-checking to see if the base image has some extension facility or another path to inject configuration files. Most of the standard database images will run initialization scripts from a /docker-entrypoint-initdb.d directory, which can be bind-mounted, and so you can avoid a custom image for this case; the nginx image knows how substitute environment variables; in many cases you can docker run -v to bind-mount a directory of config files into a container. Those approaches can be easier than replacing or wrapping ENTRYPOINT.
|
I have created an image that has an entrypoint script that is run on container start. I use this image for different purposes. Now, I want to extend this image, but it needs to modify some files in the container before starting the container but after the image creation. So the second image will also have an entrypoint script. So, do I just call the base image's entrypoint script from the second image's entrypoint script? Or is there a more elegant solution?
Thanks in advance
|
extend docker image preserving its entrypoint
|
The way I solved it:
Open SSMS and, in Server Name, write (local), then press Connect.
This happens because, when you do a default installation of SQL Server, to connect to that instance you just need to specify . (dot) OR (local) as the server name.
All credit goes to Hackerman.
|
Here is the problem I'm facing: this happens when I try to access an instance in SSMS.
It started by installing SQL Server 2016 Enterprise with Service Pack 1, 64-bit. Then I installed SSMS to create a database in it, as normal. I didn't reach this point yet, because I simply can't connect to the instance.
I've been through a really long process to make sure everything was OK:
See if MS SQL Server is started.
See if the firewall is allowing port 1433.
See if the TCP/IP protocol is enabled for MS SQL protocols.
Make sure the database engine is configured to accept remote connections.
Make sure you are using an instance name in your connection strings. (Usually the format needed to specify the database server is machinename\instancename.)
Make sure the login account has access permission on the database you used during login.
Can't seem to find the problem; any help here?
|
Connect to instance in SSMS
|
I got the answer from the Docker contributor Brian Goff:
docker run -d --name mydb postgres
docker run --rm --link mydb:db myrailsapp rake db:migrate
docker run -d --name myapp --link mydb:db myrailsapp
This is going to fire up postgres.
Fire up a container which does the db migration and immediately exits and removes itself.
Fires up the rails app.
Think of the build process like compiling an application. You don't seed data into a database as part of the compilation phase.
|
I linked my app container to postgres on run:
docker run --link postgres:postgres someproject/develop
and it worked fine.
But I realized that I need to install some stuff into the database with a django command before run. So I need linking while build. How can I do that? docker build -h doesn't have a --link option.
|
How to link docker containers on build?
|
If you know the id of the user, you can try:
Audited::Adapters::ActiveRecord::Audit.where(auditable_type: 'User', auditable_id: user_id)
For specific actions like create, update, destroy, you can try their scopes - creates, updates, destroys. I found it on their GitHub repo.
|
I am using the Audited gem with my project, but I don't understand how to get the audit trail for a deleted object. Their example shows:
user = User.create!(name: "Steve")
user.audits.count # => 1
user.update_attributes!(name: "Ryan")
user.audits.count # => 2
user.destroy
user.audits.count # => 3
But if all I know is that a user is missing, how can I access the audit, since I need access to the object that gets audited?
|
Rails 4 + audited: Get audits for deleted object
|
Unlike the data you store in Firestore or Storage, the user profiles in Authentication are fully managed by Firebase. I believe they're quite well globally replicated, but the point is that they're not your/my concern.
If you do want to create your own backup of the user data, you can do so through the auth:export command of the CLI or through the Admin SDKs.
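For example, with the Firebase CLI (a sketch; the output file name and project ID are placeholders):
firebase auth:export users-backup.json --format=json --project your-project-id
The exported file contains the UIDs, provider info and password hash data, so it can later be re-imported with auth:import if you ever need to restore accounts.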
|
With a Firebase project using GCP resources in a single Region (not dual/multi region), are Firebase Auth Users also only stored somehow in that region and would be lost in case of a disaster in that region?
I am backing up Firestore data (that contains additional information for accounts) as well as Storage data to Storage buckets in another region.
But I am wondering whether the Firebase Auth Accounts itself (I mean the data from the "Authentication" tab in Firebase Console, e.g. Auth Provider, UID, password-Hash parameters for each user) would be lost in case of a disaster? Let's say a fire destroys the GCP region completely the project has set as default GCP location - I can then of course restore the Firestore and Storage data but will all accounts (="logins") be lost or are they anyway always backed up/replicated across regions by Google.
|
Are Firebase Auth User accounts lost if using a single GCP Region as default location in case of a disaster in that region?
|
http://alestic.com/2009/04/ubuntu-ec2-sudo-ssh-rsync describes all the options available to you, and includes instructions for enabling SSH to root on EC2:
ssh -i KEYPAIR.pem ubuntu@HOSTNAME 'sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/'
|
I am trying to SSH into my server via WinSCP, although the problem will occur with putty as well.
I have Ubuntu 12.04. I have edited /etc/ssh/sshd_config and added PermitRootLogin without-password to the bottom of the file, but this still doesn't seem to have fixed my problem.
People have mentioned needing to restart the ssh daemon. I have tried:
/etc/init.d/sshd reload
reload sshd.service
/etc/init.d/sshd reload
All of the above are unrecognised.
I have then tried Files > Custom Commands > sudo -s & su. No luck there either.
|
Can't SSH as root into EC2 server - Please login as the user "ubuntu" rather than the user "root" [closed]
|
As mentioned in "Git XCode - change origin", you simply can change the remote origin url, using git remote set-url (or in your case, rename+add).
git remote rename origin upstream
git remote add origin /url/of/private/repo
(with the XCode GUI, you could remove, then add again, the remote origin)
If that private repo is empty, you can push to the full history of your cloned repo.
By renaming 'origin' to 'upstream', you keep the possibility to fetch from the original repo, while pushing to your new origin target repo.
|
I cloned an abandoned repository from Github, and now I want to be able to upload my changes to a private repo so that a few other people can work on the changes with me. Unfortunately, since I cloned it instead of making a fork, so Xcode is trying to make the commits to the original repo. Is there a way to change what repo the commits are being made to? If there is, would there be a way to change it to a repo on another website (Bit Bucket)?
I fully intend to make the repo public once the changes are complete.
|
Change Git Repository Of Existing Xcode Project
|
It appears that msbuild is not available in microsoft/dotnet-framework-build image.
I suspect (!) that this image contains the dotnet binary but not msbuild. One alternative is to find an image that includes it. Another option is to add it to the microsoft/dotnet-framework-build.
You are able to access msbuild from your local machine because it's installed there. When you run docker build, the Dockerfile is executed within the operating system defined by the image's FROM statements.
HTH!
|
whenever I run docker build I'm getting:
'msbuild' is not recognized as an internal or external command,
operable program or batch file.
and
'nuget.exe' is not recognized as an internal or external command,
operable program or batch file.
However, when I run msbuild or nuget restore from CMD it works fine on it's own. I've already added paths to System Variables / Path
|
Not recognized command when running Docker
|
GitHub only exposes a way to show the diff between two commits.
Provided those tags actually point to commits, the URL format would be something like https://github.com/{user}/{repository}/compare/{from-tag}...{until-tag}
As an example, https://github.com/libgit2/libgit2sharp/compare/v0.9.0...v0.9.5 shows the diff between two versions of the LibGit2Sharp project. This diff includes all the modified files.
If you want to retrieve a URL that targets a specific file:
Switch to the Files Changed tab.
Click on the Show Diff Stats button (this will display the list of modified files as links).
Copy to the clipboard the link of the specific file you're after... and tada! You're done.
For instance, given the diff above, the link https://github.com/libgit2/libgit2sharp/compare/v0.9.0...v0.9.5#diff-11 will point to the LazyFixtures.cs changes that occurred between versions v0.9.0 and v0.9.5.
Update: Following your comment which states that your diff is too big to be rendered through the web interface, how about reverting to good old command-line tooling? You could redirect the output of the diff to a file and then send the file as an email attachment:
$ git diff v0.9.0 v0.9.5 -- LibGit2Sharp.Tests/LazyFixture.cs > /tmp/lazyfixture.diff
|
I need to generate a diff for a single file that will show the differences between two versions, which are actually tags in GitHub. I then want to send this diff to someone via email, so a GitHub URL for the diff would be ideal. The GitHub compare view will allow me to do this for all changed files, but that's no good as there are thousands of files in my repo.
I can do this on the command line as follows, but this doesn't help as I need to send the diff to someone via email:
git diff tag1 tag2 -- path/to/file
I found the command line version discussed here: how can I see the differences in a designated file between a local branch and a remote branch?
|
How can I generate a diff for a single file between two branches in github
|
http://help.github.com/create-a-repo/
initialize your local folder as a git repo:
git init
stage your local files in the repo
git add .
commit your code to the repo
git commit -m 'comment'
tell git about your remote repo
git remote add origin //your github connection here
push your local master branch to the "origin" remote repo
git push origin master
|
I have an Xcode 4 project, and when I opened it for the first time I checked "create a local repo for the project...". I also have a repo on GitHub. How do I upload the files from my computer to the repo on GitHub?
Thanks.
|
Uploading Files to Github from a Local Repo
|
I've tried to use the more secure SSH forwarding instead of copying the private key into the machine, but found that git clone doesn't work properly this way and relies on the ~/.ssh/id_rsa key.
Thus your approach seems to be reasonable.
|
I am building an image with packer where I use git clone to get a private repository via ssh.
I set a public key on github (deploy key), and the private key inside of the instance running packer on path .ssh/id_rsa.
I also added the github public key to the known_hosts to avoid warnings.
Basically, I have a provisioner script that sets the id_rsa on the beginning and then I remove it right after running the git clone command:
sudo cp id_rsa ~/.ssh/id_rsa
ssh-keyscan github.com >> .ssh/known_hosts
...
git clone ....
sudo rm ~/.ssh/id_rsa
I don't have the private key hardcoded on the id_rsa file, I am using a github secret key.
Is this a good practice and the only way of doing it?
|
Packer and git clone private repository
|
Give it the name of the branch.
https://github.com/github/linguist/compare/c3a414e..master
You can do it manually, or use the base and compare drop downs.
In general, commit IDs, branch names, and tags are interchangeable. They are all "revisions" which specify a commit. See gitrevisions for the ways you can identify commits. For example, you can compare against where master was two years ago.
https://github.com/github/linguist/compare/c3a414e..master@{2 years ago}
head did not work because the names are case-sensitive. It is HEAD. HEAD is a special reference. On your local repository HEAD is the currently checked out commit. On GitHub it will be the tip of the default branch on GitHub, probably master. If you want master you're better off asking for master.
|
On GitHub, there's a way to do a "diff" between 2 commits.
https://help.github.com/en/github/committing-changes-to-your-project/comparing-commits
In a nutshell, it looks like this:
https://github.com/github/linguist/compare/c3a414e..faf7c6f
If I wanted to compare between a certain commit in history vs. the current head of the branch, how would I do this? I don't want to always have to look up the 7-character SHA code of the latest commit.
I've tried
https://github.com/github/linguist/compare/c3a414e..head but that doesn't work.
|
On GitHub, how to compare between a certain commit and the current head of a branch?
|
Expanding upon my comment:
You would need to define/create a callback URL on your end, which will need to be publicly accessible.
GitHub would make a hit to this URL via a git hook whenever a push is made to the branch in question. You can add authentication for the hit in the hook, if needed.
This call will inform your server that a push has been made to the certain branch on GitHub. Now your server should start a deployment task which does the pull/clone.
Regarding this deployment task, there are many ways to do this, and the finer details will vary depending on how you do it.
One way would be to introduce a Continuous Integration tool like Jenkins in your stack, which you could also use for regular and test builds in different environments.
Another way could be to execute a simple bash script which does cd $REPO_DIR && git pull origin branchname && service apache2 etc restart.
|
Currently, my production website is hosted on Azure. I use git and push all my commits to GitHub. With the magic of git hooks, Azure has the ability to pull from GitHub when someone pushes a certain branch to GitHub.
How can I replicate this with my own staging server hosted on-premise? In other words, how can I set up a repo on GitHub such that, when I push to it, a signal or request is sent via git hooks to trigger an automatic pull on my on-premise server?
I know git is not deployment software, but if I have to write a mechanism on my on-premise server to make this happen, I would like to know where to start. If it's helpful, we use Microsoft technology, so we are running our staging server on Windows Server, while our production is on Azure.
I understand that I'll need to use a callback URL on my server and then perform whatever is needed.
I would like to know what methods people use to accomplish this, e.g.: at my callback URL, how would I call a script to run a pull/fetch/clone bash command, or another method?
If you need more information, feel free to ask.
|
GitHub - setup auto deployment with remote server
|
It turns out it was because of server overloading due to another user on the shared server, so nothing to do with my code or configuration. Thanks for the help anyway!
|
I'm building a database (Postgresql) driven site using Flask on Webfaction and I'm getting some strange 404 errors. Here's what happens: after clicking through 4-5 pages on the site, there is usually a 404 error. Reloading the page (either Ctrl-R, selecting the URL and pressing Enter, or clicking the refresh icon) makes the error go away and the page displays correctly. After visiting another 4-5 pages, the same problem occurs. Surprisingly enough, it is not always the same pages giving the 404.
I'd like to have people's opinions on what could be causing these intermittent errors...
Caching?
Unhandled database connection errors?
Other types of unhandled exceptions?
Background info (feel free to ask for more):
Flask on Python 2.7
Flask-Bootstrap
Hosted on Webfaction
Here are the headers from a successful request (after reloading after getting a 404):
Response headers
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 26 Jan 2014 11:46:49 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Content-Encoding: gzip
Request Headers
GET /product/333947 HTTP/1.1
Host: [mysubdomain].webfactional.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
|
What are some possible sources for an intermittent 404 error in Flask?
|
You can change it in settings. Just decrease memory usage by the slider. Go to settings and choose the Advanced tab.
other settings:
https://docs.docker.com/docker-for-windows/#docker-settings-dialog
|
When I start Docker for Windows, memory usage increases by almost 25% of 6 GB (that's 1.5 GB) without even running a container. I can't see the Docker process in the task manager, but I worked out the memory usage by looking at the memory usage % before and after running the Docker for Windows program.
I'm running Windows 10. How can I prevent Docker from eating up all this RAM?
|
Starting Docker for windows takes so much ram even without running a container How to prevent it?
|
CUDA now supports printfs directly in the kernel. For formal description see Appendix B.16 of the CUDA C Programming Guide.
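A minimal sketch of what that looks like inside the kernel from the question (printing only from thread/block (0,0) is an assumption, just to keep the output readable; stdio.h must be included and a compute capability of 2.0 or higher is required):
// e.g. inside MatrixMulKernel, after computing tx, ty, bx, by:
if (tx == 0 && ty == 0 && bx == 0 && by == 0) {
    printf("Ad.width=%d Bd.width=%d\n", Ad.width, Bd.width);
}
The output is buffered on the device and shows up on the host once the kernel completes (e.g. at a cudaDeviceSynchronize call), which also confirms the kernel is actually being launched.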
|
I am currently writing a matrix multiplication on a GPU and would like to debug my code, but since I can not use printf inside a device function, is there something else I can do to see what is going on inside that function. This my current function:
__global__ void MatrixMulKernel(Matrix Ad, Matrix Bd, Matrix Xd){
int tx = threadIdx.x;
int ty = threadIdx.y;
int bx = blockIdx.x;
int by = blockIdx.y;
float sum = 0;
for( int k = 0; k < Ad.width ; ++k){
float Melement = Ad.elements[ty * Ad.width + k];
float Nelement = Bd.elements[k * Bd.width + tx];
sum += Melement * Nelement;
}
Xd.elements[ty * Xd.width + tx] = sum;
}
I would love to know if Ad and Bd is what I think it is, and see if that function is actually being called.
|
printf inside CUDA __global__ function
|
You should re-build the container. Do you have the Dockerfile? If yes, you can modify it: not only to add your service, you'll also need to set an ENTRYPOINT to launch postfix, while the CMD passed as argument will launch GitLab.
But as somebody said in the comments, this is a dirty solution. These should be separate containers.
Another "dirty" solution could be this: https://docs.docker.com/config/containers/multi-service_container/
Using the approach from that last link (supervisord) you can use a wrapper to launch two or more services inside the same container.
– David Maze: If you specify an ENTRYPOINT, only that gets run, and it gets passed the CMD as arguments. You can't use separate ENTRYPOINT and CMD to start two things in one container (without doing the heavy lifting in the ENTRYPOINT).
|
I have set up GitLab in a docker container (from gitlab/gitlab-ce).
I did apt-get install postfix inside the container. Now when I restart the container, postfix is not started (though in /etc/rc2.d/ there is an S01postfix link).
Question: how do I start services in the container (like postfix) when the docker container (re)starts?
|
Gitlab Docker Postfix start on "boot"
|
Since my comments solved the problem and someone else might stumble above the same system dependencies I copied my comments into an answer:This is a problem in your system, not with github. Try to use git-scm.com/download/win original git for windows software. I prefer using ssh connections (git:// URLs) with github.Git and github should work with any git solution you use. Your problem is not git but the implementation of https in your git client and possibly some wrong CA files and certificates. Also checkgit config.BTW: There are so many git clones and frontends out there, but in the end, the git-scm.com implementation, or the basic git implementation is all you need and in 99% of the cases it solves your problems much better than all these shiny frontends out there, in the wild.
|
I'm getting an error when I'm trying to push changes to my repo. It worked fine until 2-3 days ago; something happened suddenly:
unable to access 'https://github.com/meetmangukiya/meetmangukiya.github.io/': error setting certificate verify locations:
CAfile: C:\Users\admin\AppData\Local\GitHub\PortableGit_25d850739bc178b2eb13c3e2a9faafea2f9143c0\mingw32/usr/ssl/certs/ca-bundle.crt
CApath: none
|
Not able to push changes to github
|
This one is simple - you don't have a spec.jwtRules.audiences in your values file!
jwtRules contains an array, so you'll have to use some index or iterate over it. Also, I don't think your indentation, or the use of |- for audiences, is correct; per the docs it should be an array of strings.
So I came up with this example (your values are unchanged):
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
name: name01
spec:
selector:
matchLabels:
app: {{ .Values.spec.selector.matchLabels.app }}
jwtRules:
- issuer: foo
jwksUri: bar
forwardOriginalToken: true
audiences:
{{- with (first .Values.spec.jwtRules) }}
{{- range .audiences }}
- {{ . | title | quote -}}
{{- end }}
{{- end }}
renders into:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
name: name01
spec:
selector:
matchLabels:
app: app-label
jwtRules:
- issuer: foo
jwksUri: bar
forwardOriginalToken: true
audiences:
- "User1"
- "User2"In this case it uses a first element of array
|
I am rather new to helm, and I am trying to create a chart, but I am running into values not being transformed from the values.yaml file into my generated chart.
Here is my values.yaml:
apiVersion: security.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
name: name01
namespace: ns-01
spec:
selector:
matchLabels:
app: app-label
jwtRules:
- issuer: foo
jwksUri: bar
forwardOriginalToken: true
audiences:
- user1
- user2
Then with my helm template:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
name: name01
spec:
selector:
matchLabels:
app: {{ .Values.spec.selector.matchLabels.app }}
jwtRules:
- issuer: foo
jwksUri: bar
forwardOriginalToken: true
audiences: |-
{{- range .Values.spec.jwtRules.audiences }}
- {{ . | title | quote }}
{{ end }}
---
I also have a helpers file, _helpers.tpl:
{{/* vim: set filetype=mustache: */}}
{{- define "jwtRules.audiences" -}}
{{- range $.Values.spec.jwtRules.audiences }}
audiences:
- {{ . | quote }}
{{- end }}
{{- end }}
The error it's producing:
at <.Values.spec.jwtRules.audiences>: can't evaluate field audiences in type interface {}
|
helm helpers file can't evaluate field type interface array/string
|
I solved this by changing the configuration for the Nginx ingress as follows:
data:
client-max-body-size: 50M
keep-alive: "3600"
proxy-buffer-size: 500m
proxy-buffers-number: "8"
Glad if this is time-saving for anyone.
|
We have a page which has some rather large JavaScript files. When we hit the page, all the small files get downloaded. However, one of the large files is not downloaded fully and fails with net::ERR_HTTP2_PROTOCOL_ERROR most of the time. We need to open the page using only a VPN connection, as it is not open to everyone.
Just to add, the Nginx ingress controller is used with the following settings for that ingress:
nginx.ingress.kubernetes.io/configuration-snippet: |
gzip on;
gzip_types text/plain text/css image/png application/javascript;
if ($request_uri ~* \.(js|css|gif|jpeg|png)) {
expires 1M;
add_header Cache-Control "public";
}
nginx.ingress.kubernetes.io/http2-push-preload: "false"
nginx.ingress.kubernetes.io/proxy-body-size: 500M
nginx.ingress.kubernetes.io/proxy-bufferings: "off"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "36000"
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "36000"
nginx.ingress.kubernetes.io/proxy-send-timeout: "36000"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
Can we set another annotation in the Nginx ingress, or might this be an issue with the VPN? I wonder how we can resolve this issue.
|
Page is not loading a file fully and net::ERR_HTTP2_PROTOCOL_ERROR is shown
|
In my case the encoding was wrong. appspec.yml should be saved as UTF-8 and not UTF-8 BOM.
BTW: the encoding can be changed in VS 2017 using File > Save As..., then the down arrow at the Save button... Save with encoding...
|
I am deploying an application using AWS CodeDeploy to a Windows environment. I use an appspec.yml YAML file. When I deploy the application I get the following error:
The deployment failed because an invalid version value () was entered in the application specification file. Make sure your AppSpec file specifies "0.0" as the version, and then try again.
It seems like there is a problem with encoding or line endings. All the materials on the internet are for Linux but not for Windows. I use the Visual Studio editor to edit this file. How can I fix this issue?
|
AWS CodeDeploy yaml file error
|
Based on your comments below, you may try this one:
FROM prismagraphql/prisma:1.34.8
RUN apk update && apk add build-base dumb-init curl
RUN curl -LJO https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
RUN cp wait-for-it.sh /app/
RUN chmod +x /wait-for-it.sh
ENTRYPOINT ["/bin/sh","-c","/wait-for-it.sh mysql:3306 --timeout=0 -- /app/start.sh"]
Note: You need to use cp command as you want to copy the script from one location to another within your container's filesystem.
You can also confirm the presence of your script and other files/dirs in the /app folder by running the command:
$ docker run --rm --entrypoint ls waitforit -l /app/
total 36
drwxr-xr-x 1 root root 4096 Aug 29 2019 bin
drwxr-xr-x 2 root root 16384 Aug 29 2019 lib
-rwxr-xr-x 1 root root 462 Aug 29 2019 prerun_hook.sh
-rwxr-xr-x 1 root root 61 Aug 29 2019 start.sh
-rw-r--r-- 1 root root 5224 Apr 22 13:46 wait-for-it.sh
|
in this example, I copy wait-for-it.sh inside /app/wait-for-it.sh
But, I don't want to save wait-for-it.sh in my local directory. I want to download it using curl and then copy into /app/wait-for-it.sh
FROM prismagraphql/prisma:1.34.8
COPY ./wait-for-it.sh /app/wait-for-it.sh
RUN chmod +x /app/wait-for-it.sh
ENTRYPOINT ["/bin/sh","-c","/app/wait-for-it.sh mysql:3306 --timeout=0 -- /app/start.sh"]
What I have tried is this, but how can I get the wait-for-it.sh after downloading the file using curl command:
FROM prismagraphql/prisma:1.34.8
FROM node:11-slim
RUN apt-get update && apt-get install -yq build-essential dumb-init
RUN curl -LJO https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
COPY wait-for-it.sh /app/wait-for-it.sh
RUN chmod +x /wait-for-it.sh
ENTRYPOINT ["/bin/sh","-c","/wait-for-it.sh mysql:3306 --timeout=0 -- /app/start.sh"]
|
Dockerfile: how to Download a file using curl and copy into the container
|
OK, I assume you already have your local git repo created; if not, you will have to do this in the terminal in your project's directory:
git init
git add .
git commit -m "Initial commit"
Next in your github account create a new repo:
An image of how to create a repo on GitHub
Then you go to de button that says "clone or download" and copy the link you wish (https or SSH) something like this https://github.com/YourUsername/YourRepo.git
And at the end, in the console, you do this:
git remote add origin https://github.com/YourUsername/YourRepo.git
git push -u origin master
|
I am just getting used to GitHub from the instructions I got as a beginner, and got stuck at the step below. I am wondering how to get the name of the local repo, to be able to create a remote repo with the same name. So far, I have run: a) git init, b) git add readme, c) git commit -m "first". In my directory, I see a .git directory, but I don't know the name of the local repo.
Thank you.
Create a remote repository on GitHub that has the same name as
your local repository.
Add the remote repository (origin) URL to local repository.
Push local repostiory to GitHub.
Create a local branch, create/add/commit a new file.
Merge new local branch commit(s) into local master.
Push updated master branch to GitHub.
|
How to create a remote repository on GitHub that has the same name as local repository
|
You need to use update-function-code, not update-function-configuration.
Use the --image-uri option, and note that Lambda references image versions via their SHA, not the tag.
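A sketch of the call; the function name, account, region, repository and digest are all placeholders:
aws lambda update-function-code \
  --function-name my-function \
  --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo@sha256:<digest>
Pushing a new image tag to ECR alone does not change the function; you have to run update-function-code (pointing at the new tag or digest) for Lambda to pick up the new version.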
|
My intension is to deploy a new container version to my AWS lambda.
Lambda now offers docker run time and I have successfully updated the lambda docker container from the web console but not able to do so from the cli.
There is an update-function
https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html
But it does not show how I can update the container image version.
Is it possible to do update the container version via the aws cli?
|
How to update AWS lambda docker container version?
|
There is no "official tool" to do this. It could be done by iterating through the existing parameters and creating them in the target.I found this tool that somebody has written:aws-ssm-copy · PyPI: Copy parameters from a AWS parameter store to anotherIt looks like it can copy between Regions and between AWS Accounts (by providing multiple Profiles).
|
Consider that I have an AWS account that already has some parameter store data.
Is there a way to migrate this data from this parameter store to another:
parameter store?
region?
AWS account?
I would prefer official tools to do this, but tools similar to a DynamoDB dump are also welcome.
|
How to migrate parameter store data to other Region / AWS Account
|
You can do below to disable
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
bcdedit /set hypervisorlaunchtype off
and below to enable
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
bcdedit /set hypervisorlaunchtype auto
From PowerShell
To Disable
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
To Enable
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
PS: Source threads
https://superuser.com/questions/540055/convenient-way-to-enable-disable-hyper-v-in-windows-8
How to disable Hyper-V in command line?
|
I have asked something similar before, but I was wondering if someone could give me some very simple instructions for how I can turn off HyperV Container features so that I can use Virtual Box and then turn them back on to use Docker for Windows
At present I have the following message from Docker for Windows
"Hyper-V and Containers features are not enabled.
Do you want to enable them for Docker to be able to work properly?
Your computer will restart automatically.
Note: VirtualBox will no longer work."
I do NOT need both at the same time
I really need clear instructions as I do not want to be in a position where I get docker working then can never use Virtual Box again!
I have a requirement for using my existing Virtual Box VMs every now and then and I cannot be in a position where I cannot use them
Paul
|
Simple instructions needed for enabling and disabling Hyper V Docker
|
prom-client is just the client that exposes your app's stats; it is not the Prometheus server. To access the data (and the /graph expression browser) you need to access the Prometheus server, not the client endpoints. Sorry for the question.
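In practice that means running a Prometheus server and pointing it at the Node.js app's /metrics endpoint; the expression browser is then served by the server itself, typically at http://localhost:9090/graph. A minimal prometheus.yml sketch (the job name and target host:port are assumptions):
scrape_configs:
  - job_name: 'node-app'
    static_configs:
      - targets: ['localhost:3000']   # host:port where prom-client exposes /metrics
Start the server with ./prometheus --config.file=prometheus.yml and open /graph on port 9090.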
|
I am trying to use the /graph endpoint for the Prometheus expression browser, but I am not sure how to configure it. I have /metric working, but since I don't have an endpoint for /graph I am trying to find how to set it up. I thought it was built into Prometheus but haven't found examples on how to use it with node.js.
|
Using expression browser /graph with prom-client?
|
It is possible as of November 2020:
Choose Edit domain.
To add a Custom endpoint, select the Enable custom endpoint check box.
For Custom hostname, enter your preferred custom endpoint hostname. Your custom endpoint hostname should be a fully qualified domain name (FQDN), such as www.yourdomain.com or example.yourdomain.com.
For AWS certificate, choose the SSL certificate that you want to use for your domain.
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-customendpoint.html
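The same options can also be set from the CLI; a rough sketch (domain name, hostname and certificate ARN are placeholders):
aws es update-elasticsearch-domain-config \
  --domain-name my-domain \
  --domain-endpoint-options CustomEndpointEnabled=true,CustomEndpoint=search.yourdomain.com,CustomEndpointCertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/example
You still need a DNS record (for example a Route 53 CNAME) pointing the custom hostname at the domain's default endpoint.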
|
I tried creating a Route 53 alias record but that didn't work.
|
Is there anyway to create a friendly URL for AWS Elasticsearch domain url?
|
Can you add DependsOn for the EC2 creation so that it waits until the EIP is created? Having a Ref to the EIP doesn't guarantee that the instance will wait for the EIP to exist.
Comment from Sam: Good thought. I made a few adjustments such that the Elastic IP is created first, the server second, and then an IP association ("AWS::EC2::EIPAssociation") third (using DependsOn). This fixed the issue. Interestingly, it looks like I could use the NetworkInterface / AssociatePublicIpAddress property in the CFN script to have this happen automatically. I've not tested that yet but probably will tomorrow. Thanks for the help!
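A rough YAML sketch of that ordering (resource names, AMI and subnet IDs are placeholders, not taken from the original template):
ServerEIP:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc
Server:
  Type: AWS::EC2::Instance
  DependsOn: ServerEIP               # instance creation waits for the EIP
  Properties:
    ImageId: ami-12345678            # placeholder
    SubnetId: subnet-12345678        # placeholder
EIPAssociation:
  Type: AWS::EC2::EIPAssociation
  Properties:
    InstanceId: !Ref Server
    AllocationId: !GetAtt ServerEIP.AllocationId
The Ref/GetAtt references in the association already order it after both the instance and the EIP.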
|
I have a simple cloudformation script that builds a Server ("AWS::EC2::Instance") and an Elastic IP ("AWS::EC2::EIP") which it attaches to that server.The subnet has an igw attached.I also have UserData defined within the Properties of the Server. The problem is that until the EIP attaches to the Server, there is no internet connectivity. Since this is an internet-facing subnet and I don't have a NAT box/gateway configured, is there a best practice for delaying UserData until the EIP attaches?There is a dependency issue here: Server is created, EIP is created and attach to server ("InstanceId":{"Ref":"Server"}), so I don't believe I can DependsOn with the EIP.
|
Cloudformation UserData with Elastic IP
|
Examples from "Building and testing PowerShell" use shell: pwsh instead (pwsh is PowerShell Core, which is what is available on ubuntu-latest runners, whereas powershell refers to Windows PowerShell and only works on Windows runners).
See for instance:
lint-with-PSScriptAnalyzer:
  name: Install and run PSScriptAnalyzer
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    - name: Install PSScriptAnalyzer module
      shell: pwsh
      run: |
        Set-PSRepository PSGallery -InstallationPolicy Trusted
        Install-Module PSScriptAnalyzer -ErrorAction Stop
    - name: Lint with PSScriptAnalyzer
      shell: pwsh
      run: |
        Invoke-ScriptAnalyzer -Path *.ps1 -Recurse -Outvariable issues
        $errors = $issues.Where({$_.Severity -eq 'Error'})
        $warnings = $issues.Where({$_.Severity -eq 'Warning'})
        if ($errors) {
            Write-Error "There were $($errors.Count) errors and $($warnings.Count) warnings total." -ErrorAction Stop
        } else {
            Write-Output "There were $($errors.Count) errors and $($warnings.Count) warnings total."
        }
Note the lack of : after if.
if: (with a colon) is a workflow-level condition that controls whether a job or step runs.
if (...) is part of a PowerShell script.
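Applied to the question, a minimal sketch (the script path comes from the question and is assumed to write its result to standard output; if the script signals success via its exit code instead, check $LASTEXITCODE):
- name: Run script
  shell: pwsh
  run: |
    $status = ./script/outputZero.ps1
    if ($status -eq 0) {
      Write-Output "output was 0"
    }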
|
I want to be able to save the result from any command I run so that I can decide what I want to do next in my YAML file
Here is an example of a non working example of something similar to what I want
- name: Run script
shell: powershell
run: |
status = script\\outputZero.ps1
if: status == 0
echo "output was 0"
I also tried doing this
- name: Run script
shell: powershell
run: |
if: script\\outputZero.ps1
echo "output was 0"
but it gave me the error The term
'if:' is not recognized as the name of a cmdlet, function, script file, or operable program.
|
How to save command result in GitHub
|
When running just docker-compose up, the CTRL+C command always stops all running services in the current compose scope. It doesn't care about depends_on.
You would need to spin it up with detach option -d, like
docker-compose up -d producer
Then you can do
docker stop producer
And db service should still be running.
I'm not running docker-compose up, I always run docker-compose up service. So if I just "upped" the producer service, CTRL+C should only stop the producer service. But it also stops all the services that producer depends on. I think that's confusing. Detached is not an option because I do want to see the output on another shell.
– empz
May 13, 2021 at 9:54
You're right, it is somehow confusing. I don't think there's a way, to achieve exactly what you want to do. As I said, CTRL+C stops all the running services in your compose file, not just the services your upped service depends on.
– nulldroid
May 15, 2021 at 9:25
alternatively, for seeing the output: you can detach and after that docker logs producer -f (to follow logs again)
– nulldroid
May 28, 2021 at 8:52
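A workable pattern for the set-up in the question might therefore be (service names taken from the compose file below):
docker-compose up -d producer      # start producer (and db if needed) detached
docker-compose logs -f producer    # follow only producer's output
docker-compose stop producer       # stops producer only; db keeps running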
|
Given the following Docker Compose file....
version: '3.8'
services:
  producer:
    image: producer
    container_name: producer
    depends_on: [db]
    build:
      context: ./producer
      dockerfile: ./Dockerfile
  db:
    image: some-db-image
    container_name: db
When I do docker-compose up producer obviously the db service gets started too. When I CTRL+C both services are stopped. This is expected and fine.
But sometimes, the db service is started before, on a different shell, and so doing docker-compose up producer understands that db is running and only starts producer. But when I hit CTRL+C, both producer and db are stopped even though db was not started as part of this docker-compose up command.
Is there a way to avoid getting the dependencies services stopped when stopping its "parent" ?
|
How to avoid service dependencies from being stopped in Docker Compose?
|
Make sure the root directory for your PHP source files is /usr/share/nginx/html; otherwise, adjust the fastcgi settings (e.g. the SCRIPT_FILENAME path) accordingly.
This is a working configuration I have:
location /media {
if (-f $request_filename) {
# filename exists, so serve it
break;
}
if (-d $request_filename) {
# directory exists, so serve it
break;
}
rewrite ^/(.*)$ /media/index.php?$1;
}
It will rewrite all requests for files or directories that do not exist (and would normally return a 404 error) to index.php.
|
Every file is passed to "index.php", but every php file isn't properly redirected because of the fastcgi. Any workaround ?
location / {
if ($request_filename ~* "index.php") {
break;
}
rewrite ^/(.*)$ /index.php?page=$1 last;
break;
}
location ~* \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
fastcgi_pass 127.0.0.1:9000;
}
Thanks
|
Nginx does not redirect php files
|
URIs like /sports/ are actually routed to /index.php with a parameter containing the value of $request_uri. Within nginx these are all processed by the .php location block, and use the value of the expires directive within that block and that block alone.
One possible solution is to make the value of the expires directive a variable:
location ~ \.php$ {
expires $expires;
...
}
And create a map of values dependent on the original request URI ($request_uri):
map $request_uri $expires {
    default     10m;
    ~^/sports   epoch;   # effectively no caching for the sports page
}
Note that the map directive lives in the http block, at the same level as the server blocks; it cannot be placed inside a server or location block.
See this and this for details.
|
I am running wordpress on Nginx platform and have set expires header on .php and static assets separately. But now the requirement is to add custom expires header to certain urls in wordpress using nginx . I have tried adding in location block but seems it gets overriden by the expires header written in .php block
I have created a wordpress page named sports and want to provide the url with no expiry header and for rest of the urls expires header should be of 10 minutes
My Config for reference :
server {
listen 0.0.0.0:80; # your server's public IP address
server_name www.abc.com; # your domain name
index index.php index.html ;
root /srv/www; # absolute path to your WordPress installation
set $no_cache 0;
try_files $uri $uri/ /index.php;
location ~*^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|css|woff|js|rtf|flv|pdf)$ {
access_log off; log_not_found off; expires 365d;
}
location / {
try_files $uri $uri/ /index.php?$args;
expires modified +10m;
}
location ~ .php$ {
try_files $uri /index.php;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
set $no_cache 0;
add_header Cache-Control public;
expires modified +10m;
}
location ~* /sports
{
expires -1;
}
}
|
How to override expires header for certain urls in wordpress running on Nginx
|
Add at the top:
RewriteRule ^folder/ - [L,NC]
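For context, a sketch of how the whole block could look with the exclusion in place (only the first rule is new; the rest are the rules from the question):
RewriteRule ^folder/ - [L,NC]
# GENERAL
RewriteRule ^([A-Za-z_0-9\-]+)$ /index.php?page=$1 [QSA]
RewriteRule ^([A-Za-z_0-9\-]+)/$ /index.php?page=$1 [QSA]
RewriteRule ^([A-Za-z_0-9\-]+)/([a-z]+)$ /index.php?page=$1&comp=$2 [QSA]
RewriteRule ^([A-Za-z_0-9\-]+)/([a-z]+)/$ /index.php?page=$1&comp=$2 [QSA]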
|
The following rules are in an htaccess file and need to remain:
# GENERAL
RewriteRule ^([A-Za-z_0-9\-]+)$ /index.php?page=$1 [QSA]
RewriteRule ^([A-Za-z_0-9\-]+)/$ /index.php?page=$1 [QSA]
RewriteRule ^([A-Za-z_0-9\-]+)/([a-z]+)$ /index.php?page=$1&comp=$2 [QSA]
RewriteRule ^([A-Za-z_0-9\-]+)/([a-z]+)/$ /index.php?page=$1&comp=$2 [QSA]
However, I need to prevent a specific folder from redirecting; let's call it /folder/. I can't seem to get it to work correctly and hope someone can help. Thanks!!!
|
Prevent folder redirect in htaccess
|
The bug report for this issue is here. The underlying cause is that the AWS CLI shipped a breaking change in a minor version release; you can see this here.
I'm assuming here you're using the pulumi-eks package in order to provision an EKS cluster greater than v1.22. The EKS package uses a resource provider to configure some EKS resources like the aws-auth config map, and this isn't the same transient kubeconfig you're referring to in ~/.kube/config.
In order to fix this, you need to do the following:
Ensure your aws-cli version is greater than 1.24.0 or 2.7.0.
Ensure you've updated your pulumi-eks package in your language SDK package manager to greater than 0.40.0. This will also mean updating the provider in your existing stack.
Ensure you have the version of kubectl installed locally that matches the cluster version that has been provisioned.
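A rough sketch of those checks and updates from a shell (assuming the Node.js SDK; use your own package manager command for other languages):
aws --version                      # should be >= 1.24.0 (v1) or >= 2.7.0 (v2)
npm install @pulumi/eks@latest     # update the pulumi-eks package past 0.40.0
pulumi up                          # redeploy so the stack picks up the updated provider
kubectl version --client           # should match the provisioned cluster version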
|
I get the following error message whenever I run a pulumi command. I verified that my kubeconfig file is apiVersion: v1. I updated client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1 and still have the issue. What could be the reason for this error message?
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
|
is there a way to solve " Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1 " with pulumi
|
So it seems like, when you use the "gcloud preview app deploy" command, it deploys to Google Compute Engine where the app is running on port 8080. To get a static IP for your project, here are the steps to take:
1) In your code, create an app.yaml file and forward port 80 to port 8080 (where your app is listening):
network:
  forwarded_ports:
    - 80:8080
2) Go ahead and deploy your app:
gcloud preview app deploy
3) In your Google console, go to NETWORKING > FIREWALL RULES and add a new firewall rule for tcp:80.
4) Go to EXTERNAL IP ADDRESSES and change your app's IP address to static.
You will see your site running on the external IP address.
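Steps 3 and 4 can also be done from the command line; a rough sketch (rule name, address name, Region and the instance's current IP are placeholders):
gcloud compute firewall-rules create allow-http --allow tcp:80
gcloud compute addresses create my-static-ip --addresses 203.0.113.10 --region us-central1
The second command promotes the instance's existing ephemeral external IP to a static one.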
|
I have a Node.js app on Google Compute Engine which I can access with the given appspot address. In networking I set the IP address as static.
I have added a firewall rule to allow any traffic on tcp:8080. But when I try to open the external IP address in my browser it fails to load, so I cannot access my site via the external IP address. What should I do to be able to use the external IP address?
|
Google compute engine external ip
|
Found out that I needed to include the docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' and removing DOCKER_HOST=tcp://docker:2375 I was able to connect to docker on my host system and spawn sibling containers.
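For reference, a sketch of how that looks when the runner itself is started with docker run and then registered (paths are typical defaults; the GitLab URL and registration token are omitted):
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
gitlab-runner register \
  --executor docker \
  --docker-image docker:stable \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
This produces the volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"] entry in config.toml mentioned in the comments below.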
I'm faced something similar. I want to deployed using shell executor. where I suposse I could add --docker-volumes?. on config.toml? How?
– Darwin
Mar 16, 2019 at 4:25
Hi @Darwin. The docker volumes need to be on the docker-compose file if you are using it, otherwise on the docker run command. But also in the config.toml as volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
– Roberto
Mar 18, 2019 at 11:11
|
I'm currently trying to setup a gitlab ci pipeline. I've chosen to go with the Docker-in-Docker setup.
I got my CI pipeline to build and push the Docker image to the GitLab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info
build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches
deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy action I actually get the following feedback in my log, but when I go check the server there is no container running.
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue as to what I am forgetting here. Am I right to expect this method to be correct for deploying containers? What am I missing / doing wrong?
tldr: Want to deploy images into production using gitlab ci and docker-in-docker setup, job succeeds but there is no container. Goal is to have a running container on host after deployment.
|
Deploy docker container using gitlab ci docker-in-docker setup
|
These endpoints might also be helpful in some use cases:
https://api.github.com/orgs/{organization}/events
https://api.github.com/users/{user}/events
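For example, a quick sketch with curl (token and username are placeholders; the events API is paginated and only covers roughly the last 90 days of activity):
curl -H "Authorization: Bearer <YOUR_TOKEN>" \
  "https://api.github.com/users/<user>/events?per_page=100&page=1"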
|
I have been trying to find a way to show all of my own activity in the last week on GitHub. The activity feed of my profile only shows things that have made it to the main/master branches of repositories. Is there a way to view a weekly history for my profile that shows all repositories or branches? If no to the first question, then is there some way I could do this through the API? Looking at each branch of each repository, or even just each repository, is somewhat useful, but I would like to see everything. I tend to write a brief status report of work and process overall as part of my job for the week on Fridays. I usually have issues remembering everything, so things do get missed. I mostly get by with looking at what I was supposed to get done. This works fine, but I make a lot of changes that do not end up on that list. Sometimes these things are related to code outside my organization that I had to create issues on or submit PRs to that benefit the organization and should at least end up on the list.
|
How do I get my weekly activity history on GitHub for all repositories and branches as well as issues created, closed or otherwise participated in?
|
One way of filling the branch delay slot would be:
addiu $2, $2, 4 # We'll now iterate over [$2+4, $10] instead of [$2, $10[
LOOP: lw $1, 96 ($2)
addi $1, $1, 1
sw $1, 496 ($2)
bne $2, $10, LOOP
addiu $2, $2, 4 # Use the delay slot to increase $2
|
I have the following MIPS code and I am looking to rewrite/reorder the code so that I can reduce the number of nop instructions needed for proper pipelined execution while preserving correctness. It is assumed that the datapath neither stalls nor forwards. The problem gives two hints: it reminds us that branches and jumps are delayed and need their delay slots filled in, and it hints at changing the offset value in memory access instructions (lw, sw) when necessary.
LOOP: lw $1, 100 ($2)
addi $1, $1, 1
sw $1, 500 ($2)
addiu $2, $2, 4
bne $2, $10, LOOP
It's quite obvious to me that this code increments the contents of one array and stores it in another array. So I'm not exactly seeing how I could possibly rearrange this code since the indices need to be calculated prior to completing the loop. My guess would be to move the lw instruction after the branch instruction since (as far as I understand) the instruction in the delay slot is always executed. Then again, I don't quite understand this subject and I would appreciate an explanation. I understand pipelining in general, but not so much delayed branching. Thanks
|
Delayed Branching in MIPS
|
Here is my conf; it works. The 502 happens when nginx cannot find a route to the upstream server (e.g. changing http://127.0.0.1:5000/$1 to http://localhost:5000/$1 can cause a 502).
nginx.conf
http {
server {
listen 80;
server_name localhost;
location ~ ^/store/(.*)$ {
proxy_pass http://127.0.0.1:5000/$1;
}
}
}
flask app.py
#!/usr/bin/env python3
from flask import Flask

app = Flask(__name__)

@app.route('/')
def world():
    return 'world'

@app.route('/<name>/<pro>')
def shop(name, pro):
    return 'name: ' + name + ', prod: ' + pro

if __name__ == '__main__':
    app.run(debug=True)
Update
Or you can use a unix socket like this, but it relies on uwsgi.
nginx.conf
http {
server {
listen 80;
location /store {
rewrite /store/(.+) $1 break;
include uwsgi_params;
uwsgi_pass unix:/tmp/store.sock;
}
}
}
flask app.py
like above, not change
uwsgi config
[uwsgi]
module=app:app
plugins=python3
master=true
processes=1
socket=/tmp/store.sock
uid=nobody
gid=nobody
vacuum=true
die-on-term=true
save as config.ini, then run uwsgi config.ini
after nginx reload, you can visit your page ;-)
|
I have a Flask app with bjoern as python server. An example url I have is something like:
http://example.com/store/junihh
http://example.com/store/junihh/product-name
Where "junihh" and "product-name" are arguments that I need to pass to python.
I try to use unix socket after reading about the performance against TCP/IP calls. But now I get a 502 error on the browser.
This is an snippet of my conf:
upstream backend {
# server localhost:1234;
# server unix:/run/app_stores.sock weight=10 max_fails=3 fail_timeout=30s;
server unix:/run/app_stores.sock;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com www.example.com;
root /path/to/my/public;
location ~ ^/store/(.*)$ {
include /etc/nginx/conf.d/jh-proxy-pass.conf;
include /etc/nginx/conf.d/jh-custom-headers.conf;
proxy_pass http://backend/$1;
}
}
How to pass the url arguments to Flask through Nginx proxy_pass with unix socket?
Thanks for any help.
|
How to pass url arguments to Flask behind Nginx proxy_pass with unix socket
|
alloca is a non-standard compiler intrinsic whose selling point is that it compiles to extremely lightweight code, possibly even a single instruction. It basically does the operation performed at the beginning of every function with local variables - move the stack pointer register by the specified amount and return the new value. Unlike sbrk, alloca is entirely in userspace and has no way of knowing how much stack is left available.
The image of stack growing towards the heap is a useful mental model for learning the basics of memory management, but it is not really accurate on modern systems:
As cmaster explained in his answer, the stack size will be primarily limited by the limit enforced by the kernel, not by the stack literally colliding into the heap, especially on a 64-bit system.
In a multi-threaded processes, there is not one stack, but one for each thread, and they clearly cannot all grow towards the heap.
The heap itself is not contiguous. Modern malloc implementations use multiple arenas to improve concurrent performance, and offload large allocations to anonymous mmap, ensuring that free returns them to the OS. The latter are also outside the single-arena "heap" as traditionally depicted.
It is possible to imagine a version of alloca that queries this information from the OS and returns a proper error condition, but then its performance edge would be lost, quite possibly even compared to malloc (which only occasionally needs to go to the OS to grab more memory for the process, and usually works in user-space).
|
Why does alloca not check if it can allocate memory?
From man 3 alloca:
If the allocation causes stack overflow, program behavior is undefined. … There is no error indication if the stack frame cannot be extended.
Why alloca does not / can not check if it can allocate more memory?
The way I understand it alloca allocates memory on stack while (s)brk allocates memory on the heap. From https://en.wikipedia.org/wiki/Data_segment#Heap :
The heap area is managed by malloc, calloc, realloc, and free, which may use the brk and sbrk system calls to adjust its size
From man 3 alloca:
The alloca() function allocates size bytes of space in the stack frame of the caller.
And the stack and heap are growing in the converging directions, as shown in this Wikipedia graph:
(The above image is from Wikimedia Commons by Dougct released under CC BY-SA 3.0)
Now both alloca and (s)brk return a pointer to the beginning of the newly allocated memory, which implies they must both know where does the stack / heap end at the current moment. Indeed, from man 2 sbrk:
Calling sbrk() with an increment of 0 can be used to find the current location of the program break.
So, they way I understand it, checking if alloca can allocate the required memory essentially boils down to checking if there is enough space between the current end of the stack and the current end of the heap. If allocating the required memory on the stack would make the stack reach the heap, then allocation fails; otherwise, it succeeds.
So, why can't such a check be used to determine whether alloca can allocate the memory?
This is even more confusing for me since apparently brk can do such checks. From man 2 brk:
brk() sets the end of the data segment to the value specified by addr, when that value is reasonable, the system has enough memory, and the process does not exceed its maximum data size (see setrlimit(2)).
So if brk can do such checks, then why can't alloca?
|
Why does `alloca` not check if it can allocate memory?
|