Dataset columns: Response (string, 15–2k chars), Instruction (string, 37–2k chars), Prompt (string, 14–160 chars).
I figured this out with these two solutions, which let Prometheus and Grafana run behind a sub-path so nginx can simply pass requests through:
For Prometheus, launch it with the --web.external-url=/prometheus/ flag set: https://blog.cubieserver.de/2020/configure-prometheus-on-a-sub-path-behind-reverse-proxy/
For Grafana, set server.root_url in the config: https://grafana.com/tutorials/run-grafana-behind-a-proxy/#1
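A minimal sketch of how the nginx side might look once both backends know their sub-path. The upstream ports come from the question; the Grafana keys are as I recall them from the linked tutorial, so double-check there before relying on them:

# nginx locations (sketch), assuming Prometheus was started with
# --web.external-url=/prometheus/ and Grafana is configured for /grafana/
location /prometheus/ {
    proxy_pass http://localhost:9090;
}
location /grafana/ {
    proxy_set_header Host $http_host;
    proxy_pass http://localhost:3000;
}

# grafana.ini (sketch) – serve Grafana from the /grafana sub-path
[server]
root_url = %(protocol)s://%(domain)s/grafana/
serve_from_sub_path = true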
I'm setting up an AWS instance to house both my prometheus and grafana servers. I'm using NGINX to route between the 2 clients through a /location. The problem is, NGINX has to pass this value through, and the clients can't make sense of it.My NGINX config:http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; server_names_hash_bucket_size 128; types_hash_max_size 4096; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; server { listen 80; listen [::]:80; server_name <aws instance url>; location /grafana { proxy_pass http://localhost:3000; } location /prometheus { allow <my ip>; deny all; proxy_pass http://localhost:9090; } } }So when I navigate to /grafana. It successfully takes me to grafana, but the grafana client attempts to parse the /grafana and can't find a page for it, and returns a 404.Is there a way to get rid of that, or am I going about this all wrong?
How do I stop NGINX from sending through the identifying part of the URL?
You need to run apt-get update first to download the current state of the package repositories. Docker images do not include this index data, to save space, and because it would likely be outdated by the time you use the image. If you are doing this in a Dockerfile, keep it as a single RUN command so that layer caching doesn't pair an old cached update with a new package install request:
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    net-tools \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*
I want to install netstat in my Docker container. I looked here: https://askubuntu.com/questions/813579/netstat-or-alternative-in-docker-ubuntu-server-16-04-container so I'm trying to install it like this:
apt-get install net-tools
However, I'm getting:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package net-tools
So how can I install netstat?
Installing netstat on docker linux container
First of all the celery image is deprecated in favour of standard python image more infohere.WORKDIRsets the working directory for all the command after it is defined in the Dockerfile, which means the command which you are try to run will run from that directory. Docker image for celery sets the working directory to/home/user.Since your code is mounted on/celery_smapleand the working directory is/home/user, Celery is not able to find your python module.One alternative is to cd into the mounted directory and execute the command:celery: image: celery:3.1.25 command: "cd /celery_sample && celery worker -A my_celery -l INFO" volumes: - .:/celery_sample networks: - webnetnotice the commandAnd another one is to create your own image withWORKDIRset to/celery_sampleeg:FROM python:3.5 RUN pip install celery==3.1.25 WORKDIR /celery_sampleafter building you own image you can use the compose file by changing theimageof celery serviceEditYou need to link the services to one another in order to communicate:version: "3" services: web: build: context: . dockerfile: Dockerfile command: "python my_celery.py" ports: - "8000:8000" networks: - webnet volumes: - .:/celery_sample links: - redis redis: image: redis networks: - webnet celery: image: celery:3.1.25 command: "celery worker -A my_celery -l INFO" volumes: - .:/home/user networks: - webnet links: - redis networks: webnet:and your configuration file should be:## Broker settings. BROKER_URL = 'redis://redis:6379/0' ## Using the database to store task state and results. CELERY_RESULT_BACKEND = 'redis://redis:6379/0'once you have linked the services in compose file you can access the service by using the service name as the hostname.
I have Flask app with Celery worker and Redis and it's working normally as expected when running on local machine. Then I tried to Dockerize the application. When I trying to build/start the services ( ie, flask app, Celery, and Redis) usingsudo docker-compose upall services are running except Celery and showing an error asImportError: No module named 'my_celery'But, the same code working in local machine without any errors. Can any one suggest the solution?DockerfileFROM python:3.5-slim WORKDIR celery_sample ADD . /celery_sample RUN pip install -r requirements.txt EXPOSE 8000docker-compose.ymlversion: "3" services: web: build: context: . dockerfile: Dockerfile command: "python my_celery.py" ports: - "8000:8000" networks: - webnet volumes: - .:/celery_sample redis: image: redis networks: - webnet celery: image: celery:3.1.25 command: "celery worker -A my_celery -l INFO" volumes: - .:/celery_sample networks: - webnet networks: webnet:requirements.txtflask==0.10 redis requests==2.11.1 celery==3.1.25my_celery.py( kindly ignore the logic)from flask import Flask from celery import Celery flask_app = Flask(__name__) celery_app = Celery('my_celery') celery_app.config_from_object('celeryconfig') @celery_app.task def add_celery(): return str(int(10)+int(40)) @flask_app.route('/') def index(): return "Index Page" @flask_app.route('/add') def add_api(): add_celery.delay() return "Added to Queue" if __name__ == '__main__': flask_app.debug = True flask_app.run(host='0.0.0.0', port=8000)celeryconfig.py## Broker settings. BROKER_URL = 'redis://localhost:6379/0' ## Using the database to store task state and results. CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
couldn't start Celery with docker-compose
You don't need to disable SSL checking if you run the following terminal command:
/Applications/Python 3.6/Install Certificates.command
In place of 3.6, put your version of Python if it's a different one. Then you should be able to open your Python interpreter (using the command python3) and successfully run nltk.download() there. The underlying issue is that urllib uses an embedded version of OpenSSL that does not use the system certificate store. Here's an answer with more information on what's going on.
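A quick sketch of the two steps, assuming the python.org framework build of Python 3.6 on macOS (adjust the version in the path, and the 'punkt' package is just an example download):

# install the certifi-based certificates for the python.org build of Python
/Applications/Python\ 3.6/Install\ Certificates.command

# then the downloader should work without SSL errors
python3 -c "import nltk; nltk.download('punkt')"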
I am trying to download NLTK 3.0 for use with Python 3.6 on Mac OS X 10.7.5, but am getting an SSL error:import nltk nltk.download()I downloaded NLTK with a pip3 command:sudo pip3 install -U nltk.Changing the index in the NLTK downloader allows the downloader to show all of NLTK's files, but when one tries to download all, one gets another SSL error (see bottom of photo):I am relatively new to computer science and am not at all savvy with respect to SSL.My question is how to simply resolve this issue?Here is a similar question by a user who is having the same problem:Unable to download nltk dataI decided to post a new question with screenshots, since my edit to that other question was rejected.Similar questions which I did not find helpful:NLTK download SSL: Certificate verify faileddownloading error using nltk.download()
SSL error downloading NLTK data
Not a Node problem, but a git problem. Upgraded git on Windows from 1.7.11 to 1.8.3 and the spawn worked.
This code works on Windows and on Mac OS X:var exec = require( 'child_process' ).exec exec( 'git clone[email protected]:user/myrepo.git' )But this code returns an "Access denied(publickey)" error from git when running on Windows, but not on Mac OS X:var spawn = require( 'child_process' ).spawn , child = spawn( 'git', [ 'clone', '[email protected]:user/myrepo.git' ], { env: process.env } ) child.on.stderr( 'data', function( data ) { console.log( data.toString() ) })I assume inspawni'm losing my connection to~/.ssh... but I thought sending inprocess.envwould work. By the way, thegit clonecommands work fine on Windows when typed into the command prompt directly.Anything obviously wrong?
github ssh public key not found with node.js child_process.spawn() on windows, but visible on child_process.exec()
There are two things to consider here.
You can adjust this rule in Sonar and increase the number of authorized parameters — say 10 instead of the default (?) 7.
UPD: the advice below is based on the old question version. It might not be applicable to the new question context any more.
But generally you should reconsider your method interface. Having many arguments means that something may be wrong in your architecture and the Single Responsibility Principle might be broken. In your particular example, I would expect that you could have an aggregate class Order:
public class Order {
    private CountryCode countryCode;
    private String orderId;
    private User user;
    private String item;
    private List<Person> persons;
    private ShippingAddress address;
    private PaymentMethod payment;
    private Product product;
    // ...
}
This is much more logical to manage than dealing with many parameters. Then your issue is solved automatically:
@GetMapping
public void updateSomething(Order order) { ... }
A comment notes: Mark Seemann has a nice article discussing this in more detail: blog.ploeh.dk/2010/02/02/RefactoringtoAggregateServices
When I am scanning code with sonar lint the following code shows the bug as "Method has 8 parameters, which is greater than 7 authorized"@PutMapping("/something") public List<SomeList> updateSomeThing(@PathVariable final SomeCode code, @PathVariable final SomeId id, @PathVariable final String testId, @PathVariable final String itemId, @RequestBody final List<Test> someList, @RequestHeader("test") final String testHeader, final HttpServletRequest request, final SomeHeaders someHeaders)Note: This is a controller method we can not skip any parametersFYI: Eclipse showing a quick fix as squid:S00107Anybody have any idea how to resolve this bug?
Method has 8 parameters, which is greater than 7 authorized
After several guesses, I fixed it with bundle exec on the last line of the Dockerfile: CMD ["bundle", "exec", "ruby", "main_wow.rb"]
I have a very simple container running Sinatra in a Google Cloud Run. With no changes in the Dockerfile it recently stopped working. When I try to run it I get the error: /usr/local/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require': cannot load such file -- sinatra (LoadError) from /usr/local/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require' from main_wow.rb:1:in `<main>' Dockerfile: FROM ruby:2.6.4-alpine3.9 ENV APP_HOME /WOW WORKDIR $APP_HOME ADD Gemfile* $APP_HOME/ RUN gem install bundler RUN bundle install ADD main_wow.rb $APP_HOME ADD views/ $APP_HOME/views # Start server ENV PORT 3000 EXPOSE 3000 CMD ["ruby", "main_wow.rb"] Gemfile: source "http://rubygems.org" gem 'sinatra' gem 'i18n' First 10 lines of main_wow.rb: require "sinatra" require "net/http" require "json" require "i18n" I18n.config.available_locales = :en configure do set :public_folder, './views' set :bind, '0.0.0.0' end From what I could understand, it's trying to fetch the ruby gems from the major version 2.6.0, instead of 2.6.4. I have already tried to create a link, to set ruby version on the Gemfile but none seems to work...
How to fix Docker using the wrong Ruby path on Alpine
I assume with malloc(sizeof(*bufferData)) you meant malloc(helloworld.length) above (since that's the only malloc call I see in your example). The memory leak occurs when you clear your buffer: bufferData[i] = nil; This leaks because you allocated the buffer contents using malloc but did not free them later using free. Note that even under ARC you must free any malloced resources yourself. ARC only provides management for Objective-C object instances. The correct way to free the buffer here is: free(bufferData[i]); bufferData[i] = NULL;
I have a problem with memory leaks when using malloc in objective c. here's the code: .h (interface) { char *buffer[6]; NSInteger fieldCount; } -(void)addField:(NSString *)str; .m (implementation) -(void)addField:(NSString *)str { NSString *helloworld =str; if (bufferData[5] != nil) { /* clear buffer */ for (int i = 0; i<6; i++) { bufferData[i] = nil; } fieldCount = 0; } bufferData[fieldCount] = malloc(helloworld.length); char *ptrBuff = bufferData[fieldCount]; for (int i = 0; i<helloworld.length; i++) { *ptrBuff++ = [helloworld characterAtIndex:i]; } [self printBuffer]; fieldCount ++; } -(void)printBuffer { NSLog(@"buffer data %ld = %s",(long)fieldCount,bufferData[fieldCount]); } So basically I have 4 following classes below: ViewController -> UIViewController RootClass -> NSObject ChildClass1 -> RootClass Child -> Root Class Additionally: Init process of the three classes are inside -viewDidLoad method. Both childClass1 and childClass 2 have a timer to call -addField method at the same time. When I check my memory instrument, I have found a leak object every time it called -addField method. It refers to this statement: malloc(sizeof(*bufferData)); Can somebody help to solve my problem?
malloc and memory leaks in objective c
To call an API using https you need to configure an SSLContext and set it on your HttpClient. Refer to the sample code below. This is just a sample; you can load the keystore and truststore in different ways (from the classpath, from the file system, etc.), so make the changes accordingly.
KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
KeyStore identity = KeyStore.getInstance(KeyStore.getDefaultType());
TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
trustManagerFactory.init(trustStore);
KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
keyManagerFactory.init(identity, "password".toCharArray());
SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
sslContext.init(
    keyManagerFactory.getKeyManagers(),
    trustManagerFactory.getTrustManagers(),
    null
);
HttpClient httpClient = HttpClients.custom()
    .setSSLContext(sslContext)
    .setSSLHostnameVerifier(new DefaultHostnameVerifier())
    .build();
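As a sketch of one of the loading options mentioned above — reading a JKS truststore and a PKCS12 client keystore from the file system. The file names and passwords are placeholders, not values from the question:

import java.io.FileInputStream;
import java.security.KeyStore;

// Hypothetical paths/passwords; substitute your own.
KeyStore trustStore = KeyStore.getInstance("JKS");
try (FileInputStream in = new FileInputStream("/path/to/truststore.jks")) {
    trustStore.load(in, "truststorePassword".toCharArray());
}

KeyStore identity = KeyStore.getInstance("PKCS12");
try (FileInputStream in = new FileInputStream("/path/to/client-identity.p12")) {
    identity.load(in, "password".toCharArray());
}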
I am invoking rest API from Java file using HttpClient. By using that I am able to call http API but not https API.I am getting below error, while calling httpsapi.javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639) at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223) at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037) at sun.security.ssl.Handshaker.process_record(Handshaker.java:965) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)I want to call https API usingCloseableHttpClient.I am having certificate file that is having private key. Please let me know how I can use this private key to call API?
How to invoke APIs from Java using https client with ssl certificate
docker compose logs has a --no-log-prefix flag that removes the prefix. For example:
# start all services in background
docker compose up -d
# show logs for all services, without prefix (-f means follow the logs)
docker compose logs -f --no-log-prefix
# or, for a single service called foo
docker compose up foo -d
docker compose logs foo -f --no-log-prefix
See the documentation here. (Note from the comments: this option applies to docker compose only; it does not work with docker service logs in swarm mode.)
docker-compose inserts prefixes like service_1 | in the beginning of every line of output. I use this container for testing and this kind of improvement (very useful in other cases) mess my debugging logs and I want to remove it for this service. Documentation have no information about this question. Any ideas? my docker-compose.yml: version: '3' services: rds: image: postgres:9.4 ports: - ${POSTGRES_PORT}:5432 environment: POSTGRES_USER: ${POSTGRES_USER} POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} dynamo: image: iarruss/dynamo-local-admin:latest ports: - ${DYNAMODB_PORT}:8000 python: container_name: python image: iarruss/docker-python2:latest depends_on: - rds - dynamo environment: POSTGRES_HOST: rds POSTGRES_USER: ${POSTGRES_USER} POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} DYNAMODB_HOST: dynamo Edit: Clarify expected result Current output: python | python | collected 511 items python | python | tests/test_access.py python | Expected output: collected 511 items test_access.py
How can I remove prefix with service name from logs?
"Which of the following de-allocation strategies creates a memory leakage?" In my pedantic opinion the correct answer would have to be option A: it creates a memory leak because it deallocates mptr, making the mptr[i] pointers inaccessible. They cannot be deallocated afterwards, assuming that the memory is completely inaccessible by other means. Option B does not lead to a memory leak per se; mptr is still accessible after you free the mptr[i] pointers. You can reuse it or deallocate it later. A memory leak would only occur if and when you lose access to the memory pointed to by mptr. I believe the question is somewhat ill-formed; if the question were "Which option would you use to correctly deallocate all the memory?", then yes, option C would be correct. I do agree that the correct strategy to deallocate all the memory is B + A, albeit A first will cause an immediate memory leak, whereas B first will allow for later deallocation of mptr, as long as access to the memory pointed to by it is not lost. "I don't see the point of this code, if you already have allocated memory with calloc (and initialized it) why would you go as far as to allocate each cell with a cycle? Am I wrong to believe that?" The allocation is correct. Check this thread for more info.
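A short sketch of the full clean-up order described above (free the inner allocations first, then the outer array); it mirrors the question's code rather than adding anything new:

#include <stdlib.h>

int main(void) {
    char **mptr = calloc(10, sizeof(char *));
    for (int i = 0; i < 10; i++)
        mptr[i] = malloc(10);

    /* B first: release each inner buffer while mptr is still reachable */
    for (int i = 0; i < 10; i++)
        free(mptr[i]);

    /* then A: release the outer array itself */
    free(mptr);
    return 0;
}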
In a recent exam question I got this code with following options: char **mptr, *pt1; int i; mptr = calloc(10, sizeof(char*)); for (i=0; i<10; i++) { mptr[i] = ( char *)malloc(10); } Which of the following de-allocation strategies creates a memory leakage? A. free(mptr); B. for(i = 0; i < 10; i++): { free(mptr[i]); } C. All of them The answer is C. But it seems to me that applying free(mptr); would suffice covering the memory leak, and B as well, although I'm less sure of that, can someone explain me why all of them would cause a memory leak? I'm guessing the C options expects that each operation ( A or B ) are applied separatly. P.S. I don't see the point of this code, if you already have allocated memory with calloc (and initialized it) why would you go as far as to allocate each cell with a cycle? Am I wrong to believe that?
Allocating a pointer with calloc, and then dynamically allocate each cell with malloc = memory leakage?
I agree with @rubenvb that you're going to have to clone the repo and do the count locally. I don't know a tool which will get the number of files for each revision, so you're going to have to roll your own. To get the count at the current checked-out commit, you could run git ls-files | wc -l, which will give you a total for the repo at that commit. To get the all-time count, you'd need to loop over all the commits reachable from the first commit, running that command each time. You might try pushing the output of git ls-files into an array each time, and maintain a "global" array while looping through all commits. (This is likely to take some time on a big repo like jQuery.) Then you can count the size of the array afterward. The number is going to be pretty subjective depending on what you decide to count, though. Should you count a file which moves from one directory to another in a commit? (In the method I've just outlined, it will be counted as two different files.) Do you count branches which haven't been merged to master, or just any commit reachable from the HEAD of the current master branch? That's up to you.
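A rough sketch of that loop, assuming a local clone and counting unique paths across every commit reachable from HEAD (as noted above, a moved file counts as two different files):

# current checked-out commit
git ls-files | wc -l

# all-time count of distinct paths across all commits reachable from HEAD
git rev-list HEAD \
  | while read commit; do git ls-tree -r --name-only "$commit"; done \
  | sort -u | wc -l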
Is it possible to get the number of all files of all commits in a repository on GitHub?I don't use Git myself, I just need to know the number of some other big repositories.Let's take for exampleJQueryUpdateThere are files like:.editorconfig.gitattributes...and of course folder like:buildexternal...with even more files.I need to know the total number of those files.And, as a bonus, I would like to know the total number of files ever existed in this repository.Is it possible to find these numbers on GitHub?
Number of files in a GitHub repository
You could do this with bash only. No need for PHP:
find /your/directory -type f -mmin +720 -exec rm {} \;
The -mmin parameter is the file age in minutes. If you are on a shared server you could still try to execute this with shell_exec(); most hosters allow this. Also, you forgot to skip '.' and '..' in the loop.
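If you do want to stay in PHP, here is a sketch of the corrected loop from the question: it skips '.' and '..' and, importantly, prefixes each entry with the directory path so filemtime() and unlink() see the full path (the 5-day threshold is the question's own choice, not a recommendation):

<?php
$dirPath = '/home/mysite/backups';
$dir = opendir($dirPath);
if ($dir) {
    while (false !== ($file = readdir($dir))) {
        if ($file === '.' || $file === '..') {
            continue; // skip the special directory entries
        }
        $fullPath = $dirPath . '/' . $file;
        // delete regular files older than 5 days
        if (is_file($fullPath) && filemtime($fullPath) < (time() - 60 * 60 * 24 * 5)) {
            unlink($fullPath);
        }
    }
    closedir($dir);
}
?>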
I have looked around this site and others to find a simple php script that I can use with cron to remove files over X days old in a directory. There seem to be plenty but none work for me. I am on a shared server (G C Solutions) and the hosters are great but the packageI am on does not include shell access so I don't think I can use .sh or bash commands.I have a neat php script to do a Mysqldump of my database, copy it to a directory in my home area ( /home/mysite/backups ) and send me a copy via email - this all works fine. Now I was trying to run a script that would just leave 5 days worth of backups in the backups diretory. I am trying this script at present:-<?php $dir = opendir('/home/mysite/backups"); if ($dir) { // Read directory contents while (false !== ($file = readdir($dir))) { // Check the create time of each file (older than 5 days) if (filemtime($file) < (time() - 60*60*24*5)) { unlink($file); } } } ?>It doesn't work, my cron setting string is :- php -q /home/mysite/public_html/scripts/delold6.phpI've tried running it from above the the html_public, no joy, the backup directory rights are set to 755, when my backup script copies the dump to this directory files are set at 644. I have tried chmoding these to 777 - no joy. Can anyone help here. *find /path/to/your_directory -mtime +5 -exec rm -f {} \;* does not work from cron either.
Use a PHP script and cron to delete files in a directory over x days old
For anyone stumbling across this, it seems that certain v2 versions (2.2.7 in my case) fail silently if less isn't installed. In these cases, setting AWS_PAGER to an empty string should fix the problem. Later AWS CLI versions (e.g. 2.2.18) are decidedly more helpful:
aws sts get-caller-identity
Unable to redirect output to pager. Received the following error when opening pager: [Errno 2] No such file or directory: 'less'
Learn more about configuring the output pager by running "aws help config-vars".
(As confirmed in the comments, export AWS_PAGER="" did the trick.)
I've been using aws cli on this laptop for a while to interact with s3 buckets. Suddenly, the tool has stopping printing any output whatsoever: C:\>aws C:\>aws --debug C:\>aws --help C:\>where aws C:\Users\Andrew\AppData\Roaming\Python\Python37\Scripts\aws C:\Users\Andrew\AppData\Roaming\Python\Python37\Scripts\aws.cmd This is in an administrator command prompt, but it's the same in an admin powershell prompt. Windows version 10.0.18362 Build 18362 - I took the anniversary update a few weeks ago but am not sure if it's correlated or not. aws cli on my other (Win 10, anniversary update) machine, using the same authentication, works fine. I've tried straight-up uninstalling and reinstalling aws cli, but after the reinstall I can't even get it to print anything to authenticate me. Any ideas? Any more information I can give you?
aws cli has no output
Verified means the commit was signed with a GPG key known to GitHub. To "verify" existing commits you need to sign them, and the only way to do that is an interactive rebase during which you sign every commit. All rebased commits will be changed, so you have to force-push the branch. (As clarified in the comments: the rebased commits count as new commits, and commits in a branch are shown according to their position in the branch, i.e. according to the DAG.)
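A sketch of what that could look like in practice, assuming your signing key is already configured in git and added to your GitHub account. The base commit and branch name are assumptions, and the force-push rewrites public history, so coordinate with anyone else using the branch:

# re-sign every commit in the branch's history (here: since the root)
git rebase -i --root --exec 'git commit --amend --no-edit -S'

# or only the commits after some base commit
# git rebase -i --exec 'git commit --amend --no-edit -S' <base-commit>

# the commit hashes change, so the branch must be force-pushed
git push --force-with-lease origin master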
I have some unverified commits here: https://github.com/DeBos99/portable-strlen/commits/master Is there any way to verify this commits and keep them at the end of the list of commits?
Git verify pushed commits
I think you still have all the repository remotes configured locally. In the repo folder, list the remote repositories with:
git remote -v
And delete the stale remote with:
git remote rm <remote-name>
Initially, I had 2 repositories on GitHub. I deleted one and kept the other: https://i.stack.imgur.com/a4Ba7.jpg However, Android Studio still thinks I have both of them: https://i.stack.imgur.com/mrlbx.jpg How can I fix this? Thank you.
Deleted a repository in github but it still shows up in android studio? how do i remove it?
Follow this tutorial for auto renewal: https://neurobin.org/docs/web/fully-automated-letsencrypt-integration-with-cpanel/ You can install a Let's Encrypt SSL certificate using cPanel: SSL/TLS --> Install and Manage SSL for your site (HTTPS) --> Manage SSL Sites. To renew the certificate you need to regenerate it using your account key and the certificate provided by Let's Encrypt the first time. I have done that successfully on GreenGeeks shared hosting with the help of http://wayneoutthere.com/how-to-lets-encrypt-cpanel-shared-hosting/. You can use https://zerossl.com/free-ssl/#crt to generate certificates and copy them into cPanel.
I have a website running on a shared hosting provider (ie. without SSH access). CPanel is installed. Is it possible to install (and just as importantly, renew) a Let's Encrypt certificate automatically without SSH access? Perhaps a CPanel plugin or cron job (for automatic renewals)?
Let's Encrypt certificate automatic installation and renewal without SSH access?
The issue here was due to the way permissions were changed recently on our server. The global Administrators group does not have administer permissions for any projects, and an Administer group is created for each of the enterprise projects onboarded to our server.
We've upgraded our SonarQube server from 6.1 to 6.5 version and post upgrade we aren't able to see the administration options for any of the projects as earlier. We see only few options enabled "Quality Profile & Quality Gate". However, we can browse to each of the tabs by creating urls in the browser. Its just that the UI doesn't show these options.Can someone let us know what could've gone wrong and help us resolve this issue.This would play a major role in helping the customers in managing their projects.
Administration tab doesn't show all the options
There is no pixie dust. You need to write your code very carefully, in a cache-friendly manner. Have a look at "CPU Caches and Why You Care", and absolutely, positively get a copy of The Software Optimization Cookbook and read it carefully end-to-end. As an aside, OS platforms allow process memory to be pinned (made non-swappable — which is a different topic from L2 code/data caching, and you're quite far from proving that the L2 cache is the culprit in your case anyway...), but in 101% of cases the OS knows better than your app, and preventing it from swapping results in worse performance, not better.
I have a program composed of two parts:a virtual machine of a graphical programming language,image processing routines.The problem is that the virtual machine works fast enough as long as there are no big images processed. The drop of the performace of the virtual machine is about the factor of 5 after processing of a big image. I guess this is because memory buffers of the objects belonging to the virtual machine get removed from cache when a big image comes. Normally, a processor keeps a separate cache for code, separate for data, but not when my program is interpreted.QUESTION: Is there any way to make it the same for interpreted code, i.e. to mark somehow a memory buffer as high-priority for cache memory, or to allocate somehow a memory buffer that will be guaranteed to stay in cache?Let me add, that although image processing is much slower than intepreting programs, there happen to be cases when the second part becomes critical - think for example of postprocessing a set of points detected on an image - these are simple arithmetical operations that are too slow on a virtual machine then.
How to force keeping some memory buffers in cache?
Two paths:
1. Configure Nginx to serve on 443 with TLS, and configure the GCP firewall to allow https using tags.
2. With tags, configure firewall rules for the instance to serve 8080 only to GCP load balancers, and have HTTP(S) Load Balancing serve the content via TLS to the public.
In any case you'll have annoying TLS issues without a DNS name, so you should get one. Alternatively, look into serving Django from App Engine Standard.
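A minimal sketch of option 1 on the nginx side, assuming you obtain a certificate (e.g. from Let's Encrypt once you have a DNS name) and that Django keeps listening on 8080 locally. The domain and certificate paths are placeholders:

server {
    listen 443 ssl;
    server_name example.com;  # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:8080;
    }
}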
I have nginix+django server on google cloud virtual machine which is running at a specific port(8080). I am able to access the service byhttp://external_ip:8080. But I'm not able to access it over "https". I dont have a domain name. For our application it is not necessary as it is just a rest api to perform some tasks. I am relatively new to these terms like ssl certificate, domain name, nginix ... etc. It would be great if someone can help me out. Thanks in advance.
SSL certificate for the ip adress [Nginix+django server]
There are two things happening here.
A Dockerfile that starts FROM scratch starts from a base image that has absolutely nothing at all in it. It is totally empty. There is no set of base tools or libraries or anything else, beyond a couple of device files Docker pushes in for you.
The ENTRYPOINT echo ... command gets rewritten by Docker into ENTRYPOINT ["/bin/sh", "-c", "echo ..."], and causes the CMD to be totally ignored. Unless overridden with docker run --entrypoint, this becomes the main process the container runs.
Since it is a FROM scratch image and contains absolutely nothing at all, it doesn't contain a shell, hence the "/bin/sh: no such file or directory" error.
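A couple of minimal ways to actually see the echo run — these swap in a base image that does ship a shell, which is an addition to the answer above rather than something from it:

# Option 1: use a tiny base image that includes /bin/sh
FROM busybox
ENTRYPOINT ["/bin/sh", "-c", "echo 'Hello second'"]

# Option 2: shell form also works once a shell exists
# FROM alpine
# ENTRYPOINT echo "Hello second"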
I want to understand how CMD and ENTRYPOINT work. So, I just created a very simple Dockerfile:
FROM scratch
CMD echo "Hello First"
ENTRYPOINT echo "Hello second"
Then I build an image of this:
docker build -t my_image .
The logs are as below:
Step 1/3 : FROM scratch --->
Step 2/3 : CMD echo "Hello First" ---> Using cache ---> 9f2b6a00982f
Step 3/3 : ENTRYPOINT echo "Hello second" ---> Using cache ---> 1bbe520f9526
Successfully built 1bbe520f9526
Successfully tagged my_image:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
When I create a container of this image it returns:
docker run my_image
Error is:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown.
Can someone please help me with this error?
Starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
These rules should be placed in the .htaccess file.
# 301 Redirect: xyz-site.com to www.xyz-site.com
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^www.xyz-site.com$ [NC]
RewriteRule ^(.*)$ http://www.xyz-site.com/$1 [L,R=301]
# 301 Redirect: www.xyz-site.com to xyz-site.com
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^xyz-site.com$ [NC]
RewriteRule ^(.*)$ http://xyz-site.com/$1 [L,R=301]
# 301 Redirect: redirecting individual pages
Redirect 301 /previous-page.html http://www.xyz-site.com/new-page.html
Or you can redirect with PHP:
<?php
Header( "HTTP/1.1 301 Moved Permanently" );
Header( "Location: http://www.xyz-site.com" );
exit(0);
?>
I have a blog www.SITE_NAME.com which is hosted in blogger.com, Its almost 4 year old and have better search engine ranking. Most of the traffic came through Google. Now i am redesigning my site in drupal.So i want to redirect all older links with a 301 to new pages , Since i have nearly 700 pages , i want some logic to apply (and some case i want to redirect manually) . Which is better, using Apache or php? Or any other suggestion?Note : since my old site is in blogger.com, its path is something like thiswww.SITE_NAME.com/2007/08/music.htmland my new path will be likewww.SITE_NAME.com/DYNAMIC_PATH
301 redirect - apache or php for my case?
Using a named volume (or more specifically a volume created using the Docker Engine volume API) with a defined host path doesn't have much of an advantage over the method you've used. Technically, it is "easier" to create a new container, but only because you no longer have to remember the path. You can also use the volume API to "manage" the volume independently from the application container, but this is equally easy using the docker container API. If you insist, to create a named volume with an absolute host path, you need to use a volume driver. I would suggest local-persist. It is quite simple to install and works well: https://github.com/CWSpear/local-persist
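A sketch of how that pairing could look with the local-persist plugin from the linked project. The volume name and mountpoint are taken from the question, but the exact driver options are an assumption based on that project's README, so double-check them there; note also that named volumes need compose file format version 2 or later, unlike the version-less file in the question:

version: "2"
services:
  mysql:
    image: stephaneeybert/mysql:5.6.30
    volumes:
      - learnintouch-data:/usr/bin/mysql/install/data

volumes:
  learnintouch-data:
    driver: local-persist
    driver_opts:
      mountpoint: /home/stephane/dev/php/learnintouch/docker/mysql/data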
I have a Docker web application with its database which I have set up:-v /home/stephane/dev/php/learnintouch/docker/mysql/data:/usr/bin/mysql/install/dataIt works fine but I wonder if that is the recommended way to go.For I see we can also create a named volume by giving a name instead of an absolute path on the host:-v learnintouch-data:/usr/bin/mysql/install/dataBut then, how can I associate the volume name learnintouch-data with the host location at/home/stephane/dev/php/learnintouch/docker/mysql/data?Here is my currentdocker-compose.ymlfile:learnintouch.com-startup: image: stephaneeybert/learnintouch.com-startup container_name: learnintouch.com-startup ports: - "80:80" links: - mysql - redis - nodejs-learnintouch nodejs-learnintouch: image: stephaneeybert/nodejs-learnintouch container_name: nodejs-learnintouch ports: - "9001:9001" links: - redis mysql: image: stephaneeybert/mysql:5.6.30 container_name: mysql ports: - "3306:3306" environment: - MYSQL_ROOT_PASSWORD=root volumes: - "/home/stephane/dev/php/learnintouch/docker/mysql/data:/usr/bin/mysql/install/data" redis: image: stephaneeybert/redis:3.0.7 container_name: redis ports: - "6379:6379"
Database in a Docker application
You can't specify the SerDe in the Glue Crawler at this time, but here is a workaround. Create a Glue Crawler with the following configuration:
- Enable 'Add new columns only' – this adds new columns as they are discovered, but doesn't remove or change the type of existing columns in the Data Catalog.
- Enable 'Update all new and existing partitions with metadata from the table' – with this option, partitions inherit metadata properties such as classification, input format, output format, SerDe information, and schema from their parent table. Any changes to these properties in a table are propagated to its partitions.
Then:
1. Run the crawler to create the table; it will create the table with "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe".
2. Edit this to "org.apache.hadoop.hive.serde2.OpenCSVSerde".
3. Re-run the crawler.
If a new partition is added on a crawler re-run, it will also be created with "org.apache.hadoop.hive.serde2.OpenCSVSerde". You should now have a table that is set to org.apache.hadoop.hive.serde2.OpenCSVSerde and does not reset.
Every time I run a glue crawler on existing data, it changes the Serde serialization lib toLazySimpleSerDe, which doesn't classify correctly (e.g. for quoted fields with commas in)I then need to manually edit the table details in the Glue Catalog to change it toorg.apache.hadoop.hive.serde2.OpenCSVSerde.I've tried making my own csv Classifier but that doesn't help.How do I get the crawler to specify a particular serialization lib for the tables produced or updated?
Specify a SerDe serialization lib with AWS Glue Crawler
In Git, changes are done locally and then must be pushed to the remote. This lets you do your work locally before deciding it is ready to share it with others. git flow release finish will finish your release locally. You then have to push your finished release. git flow does not do this push for you. The docs have an example... Finishing a release is as simple as: $ git flow release finish 1.4.0 This will: Merge changes into the master branch, Create a 1.4.0 tag, Merge changes into the develop branch, Remove your local release\1.4.0 branch. Once your release has been finished; you’ll have to push master, develop and tags and also remove remote release/1.4.0 branch (if any): (master) $ git push (master) $ git push --tags (master) $ git checkout develop (develop) $ git push $ git push {github_username} :release/1.4.0
Initially I am cloning a Git repo to my local and then doing: git flow init . I am able to successfully create feature branch and merge to develop by creating pull request. Now I use: git flow release start <branch_name> and push the release branch to remote. Changes are fine so I do: git flow release finish <branch_name> . It executes fine on local and code is merged to develop and master branch, tag is cut and release branch is deleted, but on remote repo changes are not automatically merged to master branch but are back merged to develop branch only. What is the possible issue it did not merge into master branch of remote repo?
`git flow release finish` does not merge code in `master` branch on remote repo
Git is fast enough. If you want them in your repository, you will have to add, commit and push them once. If they don't change, they will never again be transferred and will NOT influence the pull or, moreover, the merge time. That is because git stores snapshots of files and not their diffs. Say you've got a file with a sha1 of abcdef123456. Imagine a conversation between the local and remote repos:
First push:
Local: "I've got abcdef123456 here!"
Remote: "Please transfer it to me."
Next pushes:
Local: "I've got abcdef123456 here!"
Remote: "Heh, that's boring. I've got it already."
I am currently working on a project that has a directory with a lot of small files within it that don't change. I know that I can add it to the git ignore but I still want them in my repo. Will zipping the directory shorten the time it takes to pull/merge and if so are there any other ways to shorten the process?
Git check compare is slow
As far as I know there are two metrics which allow you to monitor OOM. The first one tracks the OOMKilled status of your main process/pid; if it breaches the limit, the pod is restarted with this status:
kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}
The second one gathers the total count of OOM events inside the container. Every time a child process or other process breaches the RAM limit it is killed and the metric counter is increased, but the container keeps working as usual:
container_oom_events_total
I have a spark executor pod, which when goes to OOMKilled status, I want to alert it. I am exporting spark metrics using prometheus to grafana. I have tried some queries to kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} kube_pod_container_status_terminated_reason{reason="OOMKilled"} They don't seem to give proper results. I am cross checking the result using humio logs, which is logging the OOMKilled properly. container_memory_failures_total{pod="<<pod_name>>"} Even this is not able to capture the problems of OOMKilled which is in sync with the humio logs. Is there any other proper metric to catch OOMKilled ?
How to get metric for a spark pod OOMKilled using prometheus
You can use RESTORE FILELISTONLY FROM DISK = N'C:\Path\YourBackup.bak' to check the space the database will use upon restoration. Basically, this lets you see how big it'll be without actually restoring the backup.
I do not understand this error message:There is insufficient free space on disk volume 'S:\' to create the database. The database requires 291.447.111.680 additional free bytes, while only 74.729.152.512 bytes are available.It is true I have 74GB free on my disk S, but I'm trying to restore a backup file having only 2.4 GB.Is it possible a backup of 2GB to fill 291 GB?Later edit: Source database before backup has 52GB (data) + 225G (log).
Could not restore a database
It depends on your definition of "last". For a given branch (like master), GET /repos/:owner/:repo/commits/master is indeed the last (most recent) commit. But you can also consider the last push event: that would represent the most recent commit done (on any branch) and pushed by a user to this repo.
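A quick sketch of both reads with curl — the OWNER/REPO values and the events endpoint are illustrative assumptions, not part of the answer above:

# latest commit on the master branch
curl -s https://api.github.com/repos/OWNER/REPO/commits/master

# recent events for the repo (push events include the pushed commits)
curl -s https://api.github.com/repos/OWNER/REPO/events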
Which is the best way to get the latest commit information from a git repository using GitHub API (Rest API v3). Option 1: GET /repos/:owner/:repo/commits/master Can I assume that the object 'commit' of the response is the latest commit from branch master? Option 2: GET /repos/:owner/:repo/git/commits/5a2ff Or make two calls, one to get the sha by getting the HEAD ref from master and then get the commit information using the sha returned.
How can I get last commit from GitHub API
You can build on shaunc's idea to use the lookup function to fix the original poster's code like this:
apiVersion: v1
kind: Secret
metadata:
  name: db-details
data:
  {{- if .Release.IsInstall }}
  db-password: {{ randAlphaNum 20 | b64enc }}
  {{ else }}
  # `index` function is necessary because the property name contains a dash.
  # Otherwise (...).data.db_password would have worked too.
  db-password: {{ index (lookup "v1" "Secret" .Release.Namespace "db-details").data "db-password" }}
  {{ end }}
Only creating the Secret when it doesn't yet exist won't work, because Helm will delete objects that are no longer defined during the upgrade. Using an annotation to keep the object around has the disadvantage that it will not be deleted when you delete the release with helm delete ...
I want to generate a password in a Helm template, this is easy to do using therandAlphaNumfunction. However the password will be changed when the release is upgraded. Is there a way to check if a password was previously generated and then use the existing value? Something like this:apiVersion: v1 kind: Secret metadata: name: db-details data: {{ if .Secrets.db-details.db-password }} db-password: {{ .Secrets.db-details.db-password | b64enc }} {{ else }} db-password: {{ randAlphaNum 20 | b64enc }} {{ end }}
How not to overwrite randomly generated secrets in Helm templates
Let's say you are starting a new repository. You'd have to start local first, right? So, git init -> Initializes a repository on your local computer. (assuming you started with an empty folder) Now you have an empty repository. Now it's time to add lots and lots of awesome code/content. Once you have some code, you will commit. git commit -> Git will still keep all your changes locally; but remembers all the changes you made in this commit. Let's say you make a couple of more changes, and want to save your work. So, you'll run the commit again. Git again saves your changes, but in commit #2, still local to your computer. Now you are ready to share your work with other people. Because Github repo's are online (typically), you will have to push (i.e. upload) your changes to a remote repository. Difference between commit and push is, the first one keeps all your changes locally on your computer (no one else in your team has access to your changes or commits), and push will make your code available to everyone. Hope that's clear!
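A compressed sketch of that flow on the command line — the remote URL and branch name are placeholders:

git init                          # create a local repository
# ...write some code...
git add .
git commit -m "first commit"      # commit #1, still only on your computer
# ...more changes...
git commit -am "second commit"    # commit #2, still local

# share your work: upload the local commits to the remote (e.g. GitHub)
git remote add origin git@github.com:user/project.git
git push -u origin master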
I'm using github to work on a project with two other people and am getting very confused about the whole commit thing, and nothing I'm reading is helping me understand. I get that commit records changes that you've made to a local repository... but then why are my group members' commits showing up on the online repository? Can you commit to both the local repository on your computer as well as an online repository? If you can commit to an online repository, what is the difference between doing that and simply using git push to push your changes online? Thank you kindly.
Confused about commits on github
Try to change the timezone in the php.ini configuration file, and then restart the Apache service. You should have php.ini somewhere inside your WAMP installation folder. EDIT: You might find the php.ini file inside the folder /wamp/bin/php/phpX.X.X, where phpX.X.X is your PHP version. Look for the "date.timezone" line and change it to something like this:
date.timezone = "America/Los_Angeles"
You can find the supported timezones here: http://www.php.net/manual/en/timezones.php More technical information is available here: http://php.net/manual/en/datetime.configuration.php The .htaccess file has a local scope, normally the folder and sub-folders where the file is created, whereas changing the php.ini file makes the change global to your PHP installation.
I am using WAMP server running PHP. At a particular step I am trying to capture system time and add it to the database with the following query$strSQLInsert = "UPDATE track SET State = 'Repeat' , DateTime = '" . date("m/d/Y h:i:s a") . "', where AccID like '". $values['SampleID'] ;but the time stamp is way off than my system time. The date is okay. I googled and found out that I can update my .htaccess with this lineSetEnv TZ America/Los_AnglesBut I couldn't find where htaccess is. How do I get the correct time stamp.
Apache time stamp incorrect
Use Route 53. Create a record set with these values:
Name: www.example.com
Type: A - IPv4 address
Alias: Yes
Alias Target: [click and choose your elastic load balancer]
Alias Hosted Zone ID: [auto-fills in when you choose the above; you can match this to your logs]
Without using Route 53, you may be fighting an uphill battle, I'm not sure. (From the comments: the asker got it working with a CNAME as described in the Elastic Beanstalk docs; also, if the domain was previously listed in your Route 53 record set, you may have to wait for DNS propagation.)
I'm following the instructions Using Custom Domains with AWS Elastic Beanstalk to map a custom domain to an AWS Elastic Beanstalk URL. My Elastic Beanstalk URL is as follows: http://myenvironment-specific-string.elasticbeanstalk.com/ I've created a CNAME record that says: www.example.com myenvironment-specific-string.elasticbeanstalk.com 8 hrs I've also looked up the CNAME using MxToolBox' CNAME Lookup tool where it shows it correctly. But when I try www.example.com, it doesn't show up. Am I missing something? I'm stuck and this is racking my brains apart! Help me! :(
How to map custom domain to an AWS Elastic Beanstalk URL?
The official recommendation is to ignore vendor/: "Tip: If you are using git for your project, you probably want to add vendor into your .gitignore. You really don't want to add all of that code to your repository." Make sure to include both your composer.json and composer.lock files, though.
I want to use the autoloader generated by composer for my unit tests to load classes automatically.Now I don't know if I should commit my vendor directory to my git repo. A pro is that everyone who clones my repo immediately can run the phpUnit tests. A con is that I ship a lot of proprietary code with my repo.Should I insist that the user who clones my repo has to runcomposer installfirst and therefor has to have composer "installed"?Is it a solution to don't commit vendor directory into my git repo but pack it into a release branch so that my application runs out of the box?
Should I ship my vendor directory of composer with GIT
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.In any case, you still need to run different worker processes for each instance, as Redmine (and generally most Rails apps) doesn't support database switching for each request and some data regarding a certain environment are cached in process.Given that, there is not really much incentive to share even the codebase as it would require certain monkey patches and symlink-magic to allow the proper initialization for the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that but it's (in my eyes) rather brittle and leads to a rather non-standard system.But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
I'm studying the best way to have multiple redmine instances in the same server (basically I need a database for each redmine group).Until now I have 2 options:Deploy a redmine instance for each groupDeploy one redmine instance with multiple databaseI really don't know what is the best practice in this situation, I've seen some people doing this in both ways.I've tested the deployment of multiple redmines (3 instances) with nginx and passenger. It worked well but I think with a lot of instances it may not be feasible. Each app needs around 100mb of RAM, and with the increasing of requests it tends to allocate more processes to the app. This scenario seems bad if we had a lot of instances.The option 2 seems reasonable, I think I can implement that with rails environments. But I think that there are some security problems related with sessions (I think a user of site A is allowed to make actions on site B after an authentication in A).There are any good practice for this situation? What's the best practice to take in this situation?Other requirement related with this is: we must be able to create or shut down a redmine instance without interrupt the others (e.g. we should avoid server restarts..).Thanks for any advice and sorry for my english!Edit:My solution: I used a redmine instance for each group. I used nginx+unicorn to manage each instance independently (because passenger didn't allow me to manage each instance independently).
Multiple redmine instances best practices
Would you be able to provide a code sample along with a stack trace detailing your error? This will help in better visualizing what you may be trying to achieve. This documentation provides details on deleting entire collections or subcollections in Cloud Firestore. If you are using a larger collection, you have the option to delete data in smaller batches to avoid out-of-memory errors. The code snippet there is somewhat simplified, but provides a method for deleting a collection in batches.
I have a function in Java which is reading the data from firestore collection and deleting them with fixed batch size. I want to execute this from dataflow, but when I add this in .apply I am getting compilation error: "The method apply(String, PTransform) in the type Pipeline is not applicable for the arguments (String, void)"How can we call such a function inside apply
Delete Firestore Collection Using Dataflow & Java
After the latest update, there is only one edge port, 4566. Yes, you can see your file: open http://localhost:4566/your-bucket-name/your-file-name in Chrome and you should be able to see the content of your file. (From the comments: if this returns a 403, check the permissions on the object itself — permissions can be set at the bucket or the object level.)
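You can also inspect the mocked bucket from the command line by pointing the AWS CLI at LocalStack's edge port — a small sketch, with the bucket and file names as placeholders:

# list buckets known to the LocalStack S3 mock
aws --endpoint-url=http://localhost:4566 s3 ls

# list and download objects from a specific bucket
aws --endpoint-url=http://localhost:4566 s3 ls s3://your-bucket-name
aws --endpoint-url=http://localhost:4566 s3 cp s3://your-bucket-name/your-file-name .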
I've setup a localstack install based off the article How to fake AWS locally with LocalStack. I've tested copying a file up to the mocked S3 service and it works great. I started looking for the test file I uploaded. I see there's an encoded version of the file I uploaded inside .localstack/data/s3_api_calls.json, but I can't find it anywhere else. Given: DATA_DIR=/tmp/localstack/data I was expecting to find it there, but it's not. It's not critical that I have access to it directly on the file system, but it would be nice. My question is: Is there anywhere/way to see files that are uploaded to the localstack's mock S3 service?
Is there a way to see files stored in localstack's mocked S3 environment
See the code of docker start:
Ln99:  resp, errAttach := dockerCli.Client().ContainerAttach(ctx, c.ID, options)
Ln136: dockerCli.Client().ContainerStart(ctx, c.ID, startOptions)
docker start consists of separate attach & start operations; if the container is already started, the start operation is simply skipped, but the attach still happens. So yes, they are the same in this scenario. BTW, since October 2014 the Docker team suggests using docker exec to enter a container.
The Docker documentation indicates thedocker attachcommand is used to attach to arunningcontainer (atdocker container attach) and thedocker startcommand is used to startstoppedcontainers (atdocker container start).However, I tried applyingdocker start -aito arunningcontainer, and it looks that it can successfully attach to the running container.So my question is"Aredocker start -aianddocker attachthe same when they are used to attach to a running container?".
"docker attach" vs "docker start -ai" for a running container
Since there is no malloc on an OpenCL device, and structs are passed in buffers as an array of structs, you can add an index field so each struct knows where it sits in the array. You can allocate one big buffer before launching the kernel, then use atomic functions to increment a fake "malloc pointer": it behaves as if it were allocating from the buffer, but it simply returns an integer that points to the last "allocated" struct index. The host side then just uses the index instead of a pointer. If struct alignment becomes an issue between host and device, you can also add indexing of fields, such as the starting byte of field A and field B, packed into a single 4-byte integer for a struct with four used fields besides the indexes. You could also add a preprocessing stage: the host writes an artificial marker such as 3.1415 into a field; the device scans the struct's bytes at every offset until it finds 3.1415; the device stores the found byte offset in an array and sends it to the host; from then on the host writes float fields starting at that offset, so host and device become alignment-compatible and use the same offset in every kernel that receives the struct. The opposite direction may be better: the device puts 3.14 in a field and writes the struct into an array of structs, the host reads the buffer, finds 3.14, records the byte offset, and writes floating-point values starting from that offset in future work. Either way you need both your class and its replicated struct on the host and device side.
I have an OpenCL C++ code working on the Intel Platform. I do have an idea that pointers are not accepted within a structure on the Kernel End. However, I have a Class which utilizes the Self-Referencing Pointer option within it. Now, I am able to use a structure and replicate the same for the structure on the host side but I am not able to do the same on the device side.For example as follows:Class Classname{ Classname *SameClass_Selfreferencingpointer; } On the Host side I have done the same for the structure as well: struct Structurename{ Structurename *SameStructure_Selfreferencingpointer; }Could someone give an alternate option for this implementation for the device side?Thank you for any help in advance.
Self Referencing Pointer in OpenCL
If you have the login (username) of the user/group, you can use the organization and user root fields to query both at once and check which of the two is not null:

    {
      org: organization(login: "google") {
        name
        members { totalCount }
      }
      user: user(login: "google") {
        name
        login
      }
    }

which gives:

    {
      "data": {
        "org": {
          "name": "Google",
          "members": { "totalCount": 1677 }
        },
        "user": null
      }
    }

or use a variable for the login input and try it in the GraphQL explorer. The REST API v3 is much simpler, since it doesn't distinguish users from organizations in the endpoint (https://developer.github.com/v3/users):

    curl -s 'https://api.github.com/users/google' | jq -r '.type'
    Organization
    curl -s 'https://api.github.com/users/torvalds' | jq -r '.type'
    User
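The same REST v3 check is easy to script. Here is a minimal sketch using the requests library; the account names are just examples.

```python
# Minimal sketch of the REST v3 check described above; account names are examples.
import requests

def account_type(login: str) -> str:
    resp = requests.get(f"https://api.github.com/users/{login}")
    resp.raise_for_status()
    return resp.json()["type"]  # "User" or "Organization"

print(account_type("google"))    # Organization
print(account_type("torvalds"))  # User
```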
Take the following GitHub URL: https://github.com/google. How can I determine whether google is a user or an organization? I need to know this in order to query GitHub's GraphQL API in the correct way.
In Github API, how can one distinguish a user from an organisation?
In your instance it sounds like you should have a single inbound rule for the security group assigned to your ElastiCache Redis cluster. This rule for port 6379 should specify the security group assigned to your EC2 instance(s) in the "source" field. By specifying the security group ID in the source field, instead of an IP address or IP range, you can easily scale-out your EC2 server cluster, or make modifications to your EC2 instance that might result in an IP address change, without needing to change the security group rules for your ElastiCache cluster. Note that if you do continue using IP addresses in your security group, you need to use the Private IP of the EC2 server, not the Public IP.
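If you script this rule instead of clicking through the console, it might look like the sketch below. Both security-group IDs are placeholders for the ElastiCache cluster's group and the EC2 instances' group.

```python
# Sketch with boto3; both security-group IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0redis1111111111",  # SG attached to the ElastiCache Redis cluster
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 6379,
        "ToPort": 6379,
        # Reference the EC2 instances' security group instead of an IP range,
        # so instance IP changes or scale-out need no rule updates.
        "UserIdGroupPairs": [{"GroupId": "sg-0appservers222222"}],
    }],
)
```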
I want to create a security group for AWS Elasticache (Redis). As far as i see, i have 2 options: Either open a Custom TCP connection on port 6379, and define the IP addresses what can reach Redis as a source. Or, what currently works: I Open the 6379 port to anywhere (so that my EC2 instance can connect to it), and secure the components before the EC2. What is the best approach here?
AWS Redis Security group example
The code says:

    // if there are any entries in numlist
    if (numlist.Any())
    {
        // find the first entry whose Number matches the request,
        // or if not found, return the default for the element type,
        // which is null according to the warning
        numlist.FirstOrDefault(c => c.Number == request.Number)
            // set the ValCount property of the result to request.Count
            .ValCount = request.Count;
    }

The problem is that FirstOrDefault(predicate) returns a default value if none of the elements in the source collection matched the predicate. In other words, it is not guaranteed that any entry in numlist has Number == request.Number, in which case FirstOrDefault() returns null, and you can't assign ValCount on null; that throws a NullReferenceException. Furthermore, combining Any() and FirstOrDefault() is superfluous in this case. You can refactor the code like this:

    var requestedItem = numlist.FirstOrDefault(c => c.Number == request.Number);
    if (requestedItem == null)
    {
        throw new ArgumentException($"Cannot find item with number '{request.Number}'");
    }
    requestedItem.ValCount = request.Count;

This guards the execution (and at the same time satisfies the static analysis), guaranteeing that a situation you believe can never happen really never happens.
C# code:

    if (numlist.Any())
    {
        numlist.FirstOrDefault(c => c.Number == request.Number).ValCount = request.Count;
    }

SonarQube raises a bug saying "'numlist.FirstOrDefault(c => c.Number == request.Number)' is null on at least one execution path." I have tried making it nullable (numlist?) but it doesn't work. Can you please assist with how to resolve this issue?
SonarQube raises a bug: "is null on at least one execution path."
I found this code in the package source; it looks like it handles the connection (IKEv2-EAP):

    /// Use given credentials to connect VPN (ikev2-eap).
    /// This will create a background VPN service.
    static Future<Null> simpleConnect(
        String address, String username, String password) async {
      await _channel.invokeMethod(
          'connect', {'address': address, 'username': username, 'password': password});
    }
I tried using this package to build a VPN connection app, but it does not support connection types like L2TP or PPTP: https://pub.dev/packages/flutter_vpn
Is it possible to make a VPN app using Flutter and Dart?
The Spring Boot documentation describes how to configure SSL on the server: https://docs.spring.io/spring-boot/docs/current/reference/html/howto-embedded-servlet-containers.html#howto-configure-ssl. For configuring the client RestTemplate, see section "4. The Spring RestTemplate with SSL" here: http://www.baeldung.com/httpclient-ssl
I made two apps, a client and a server, using RestTemplate and RestController. I need to secure the API with a self-signed certificate: the RestController on the server side should only answer requests made with that certificate. Is this possible with Spring Boot's RestTemplate/RestController, and how do I set it up on the client side and on the server side?
Spring Boot - client server REST API with self-signed certificate
Create a new branch, commit, then create a PR from that new branch. I'd suggest reverting to the HEAD of the upstream repo, not the head of your other patch.
I have forked an open source repository, written thousands of lines of code on my fork and created a pull request on the original project.In the meantime I have fixed another bug totally unrelated to my first pull request. I'd like to create a second pull request just for this bug which does not contain any of the work in my first pull request.Can it be done without me forking the repository again?
GitHub - how to create two pull requests from one fork
OpenStack uses VXLAN tunnels for communication, and a VXLAN tunnel reserves 50 bytes for its headers. So if the host machine's NIC has an MTU of 1500, the OpenStack VMs will have an MTU of 1450, and ideally the Docker bridge should have an MTU <= 1450.
I have Docker installed on an OpenStack VM. What should the MTU size be for my Docker bridge network so that containers can communicate with the outside? Most posts suggest setting it to 1400; I am looking for what the exact size should be, with a good explanation.
What should be the ideal MTU size for docker bridge on a openstack VM?
Is that memory actually being used, or is it cached? SSH into your Beanstalk instance and use the free command to determine this. This article has a good breakdown of how to tell whether your RAM is actually used or cached and what it means.
I created the simplest Flask app I could imagine:import flask from flask import Flask application = Flask(__name__) @application.route('/') def index(): return flask.jsonify(ok=True)I deployed this app on 1/26 to Elastic Beanstalk. It has served 0 requests since deployment. Here is a graph of the memory usage, usingAmazon's memory monitoring scripts:You can see the little dip where (I assume) garbage collection happened on 1/29. But what on earth is allocating so much memory? If this is normal, how should I be monitoring memory so I can actually figure out if my (real) application has a memory leak? Is this Flask's fault, Python's fault, AWS's fault, ...something else?Edited to add:I switched over to using mod_wsgi this aftenoon, but it didn't seem to have any effect. Updated graph (the dips are deploying new versions, it took a few tries to get the config right):Output offree -m:total used free shared buffers cached Mem: 532 501 31 0 81 37 -/+ buffers/cache: 381 150 Swap: 0 0 0
Why is flask using all of my memory?
To runpipfor python3 usepip3, notpip.
I am getting the error using pip in my docker image.FROM ubuntu:18.04 RUN apt-get update && apt-get install -y \ software-properties-common RUN add-apt-repository universe RUN apt-get install -y \ python3.6 \ python3-pip ENV PYTHONUNBUFFERED 1 RUN mkdir /api WORKDIR /api COPY . /api/ RUN pip install pipenv RUN ls RUN pipenv syncI installed python 3.6 and pip3 but gettingStep 9/11 : RUN pip install pipenv ---> Running in b184de4eb28e /bin/sh: 1: pip: not found
Can't install pip in Ubuntu 18.04 Docker image: /bin/sh: 1: pip: not found
First try to stop StartSonar.bat with Ctrl+C as suggested, then try to open localhost:9000 (or whichever port you configured the Sonar server on). If it still opens, go to Task Manager, search for the wrapper.exe service and stop it; if no such service or app is found, go to Task Manager > Details and stop all java.exe processes. Note: if you are running other Java applications, right-click each java.exe, choose "Go to service", and stop only the java.exe processes that belong to the Sonar deployment.
I use SonarQube 4.3 and I can't find a script to stop Sonar on Windows x86-64. It's awkward to have StartSonar.bat and nothing to stop it. On linux-x86-64 I can use ./sonar.sh stop. I saw that there are StartNTService.bat and StopNTService.bat, but I don't want to install Sonar as a service.
Stop SonarQube on Windows x64
You are looking for Vectorized Environments. They allow parallel interaction with multiple copies of your environment.
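A minimal sketch of what that looks like with Stable-Baselines3's SubprocVecEnv follows. It assumes stable-baselines3 with the classic gym API, and uses CartPole purely as a stand-in for your slow environment; the number of workers and the algorithm are arbitrary choices.

```python
# Sketch: 8 copies of the environment step in parallel worker processes,
# so the agent collects experience much faster than with a single env.
# CartPole, PPO and n_envs=8 are placeholders for your own setup.
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env():
    return gym.make("CartPole-v1")

if __name__ == "__main__":
    vec_env = SubprocVecEnv([make_env for _ in range(8)])
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=100_000)
```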
I am trying to run deep reinforcement learning on a slow environment, and sequential learning is frustrating. Is there any way to speed up the learning process? I tried some offline deep reinforcement learning, but I still need higher throughput (if possible).
parallelized deep reinforcement learning
The error message says exactly what the problem is: none of the CN or subjectAltName domains in the certificate match the domain you are trying to install the certificate for. In other words, your Cloudflare domain is not in the certificate you are trying to upload. Recheck your certificate. Portecle and KeyStore Explorer (KSE) are good tools to inspect and manage certificates.
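If you prefer to check this programmatically rather than with a GUI tool, here is a small sketch using the cryptography package; the file path is a placeholder for your downloaded PEM certificate.

```python
# Quick check of which hostnames a PEM certificate actually covers;
# "origin.pem" is a placeholder path for the certificate you downloaded.
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("origin.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
print("CN:", [a.value for a in cn])

try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    print("SANs:", san.value.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    print("No subjectAltName extension")
```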
I generated a certificate on digicert.com and downloaded it. When I upload the CSR and private key to the Cloudflare SSL configuration, it shows a weird issue: 'Unable to find a host name belonging to the zone on the certificate'.
Cloudflare SSL fails to upload
$argv[0] always contains the name of the script file as it was passed to the PHP binary. As per the screenshot, $argv[1] is '33' and $argv[2] is 'On'. You can easily check with:

    echo $argv[1];

Or you can list all arguments as an array with:

    var_dump($argv);

Basically, the following task is added to crontab when scheduled via Plesk:

    /usr/bin/php5 -f '/test.php' -- '33' 'On'

If test.php contains the commands above, the result of its execution will be the following:

    # cat /test.php
    <?php
    echo "The first argument is $argv[1]\n";
    echo "Here the full list of arguments:\n";
    var_dump($argv);
    ?>
    # /usr/bin/php5 -f '/test.php' -- '33' 'On'
    The first argument is 33
    Here the full list of arguments:
    array(3) {
      [0]=> string(6) "/test.php"
      [1]=> string(2) "33"
      [2]=> string(2) "On"
    }
I have created a Cron Job/Scheduled Task in PLESK 12 which I am passing the arguments 33 and On through using the arguments box. I am struggling to pick these up in the PHP document on the end of the cron job.In the PHP document I have tried a number of things including $arg[0] and $argv[0]$arg returned as being an undefined variable whilst $argv[0] does not error but also does not pass the arguments through successfully as the desired changed has not been made.I have checked to ensure the PHP script is working and it works fine when the arguments are hard coded into the program but I want this to be dynamic.<?PHP include_once('xxx/xxx/xxx/db.php'); include('xxx/xxx/xxx/xxx/db.php'); $query = "UPDATE SQLCommand SET argument1 = '$argv[1]' WHERE argument2= $argv[0]"; $result = mysqli_query($connection,$query);Can anyone explain why these are still not passing the arguments through.Thanks
How to get arguments from PLESK Cron jobs
The question I have is, what if the work I'm doing in my new feature branch depends on the work I just completed in my previous feature branch? Should I be initially branching my new feature branch from my as-of-yet unmerged feature branch instead of the develop branch? The way you're describing it, yes, you would. However, I'd be concerned about potentially going down the path of working off of "unapproved / unreviewed" work, as if your code review results in significant changes, you may find yourself redoing a lot of work. If I've already created my new feature branch from the develop branch, is getting the changes I'm missing from the unmerged branch as simple as doing a git merge [unmerged-branch] within my new branch? Yep. Should be. :)
My company has a Git workflow that looks something like this: Create feature branch from pristine branch (we use a base branch called "develop", but you can think of this as "master") Do the work you need to do in this feature branch, and commit your changes Occasionally, rebase your feature branch with the develop branch When the work in your feature branch is complete, commit and push to the remote feature branch on GitHub Create a pull request to merge your feature branch into the develop branch, which gets code reviewed by another developer Once the code review is completed, the feature branch is merged into the develop branch, and the feature branch is deleted This works when you're dealing with a serial workflow, but when you've pushed your changes from your feature branch and are waiting on the other developer to review and merge your changes, you probably want to take on another piece of work, which means repeating the above process. In our case, we're currently creating our feature branches from the develop branch, so the work I just completed isn't yet available (it's still in limbo, waiting to be merged into the develop branch by another developer). The question I have is, what if the work I'm doing in my new feature branch depends on the work I just completed in my previous feature branch? Should I be initially branching my new feature branch from my as-of-yet unmerged feature branch instead of the develop branch? If I've already created my new feature branch from the develop branch, is getting the changes I'm missing from the unmerged branch as simple as doing a git merge [unmerged-branch] within my new branch? Hopefully this explanation -- as well as the workflow itself! -- makes sense. I've gotten myself into some weird situations where I'm unclear the state of my code, so I'm trying to figure out a workflow that gives me the flexibility to merge in changes from other feature branches while still getting upstream changes at any time.
Merge non-merged feature branch into another feature branch with Git
The FAQ just says that CPython itself does not actively deallocate all the memory it has acquired when it terminates. If you run CPython on any normal server/desktop OS that releases all of a process's memory when it exits, then there is no memory-leak issue: the OS takes care of deallocating everything when the process has exited. The FAQ is really informing you that CPython does not call free() (or similar) on all the memory it allocated with malloc() (or similar). This can have consequences if you run CPython on an OS that does not release all memory acquired by a process when the process exits (such operating systems exist, particularly among embedded kernels). And if you run CPython under a memory profiler or leak detector, that detector might report the memory that was never free()'d as leaks.
I want to be clear, I am not seeing the behavior described by this question. Instead my question is about the question itself: The python 3 official FAQ says this verbatim: Why isn't all memory freed when CPython exits? And provides this answer: Objects referenced from the global namespaces of Python modules are not always deallocated when Python exits. This may happen if there are circular references. There are also certain bits of memory that are allocated by the C library that are impossible to free (e.g. a tool like Purify will complain about these). Python is, however, aggressive about cleaning up memory on exit and does try to destroy every single object. If you want to force Python to delete certain things on deallocation use the atexit module to run a function that will force those deletions. This, assuming a managed memory operating system (Linux, Mac, Windows, GNU, BSD, Solaris...), sounds like total nonsense. On a program exiting (be it Python or anything else) any memory it requested from the OS is freed (as the OS has control of the virtual page tables, etc, etc). The program doesn't have to de-allocate or de-construct anything (something programs used to have to do, as highlighted by the time someone's use of cp got bottlenecked by a hash table deconstruction), but I don't think any OS' Python 3 supports puts this requirement on programs. Does this make sense in some context I'm not aware of? What is this referring to?
Why isn't all memory freed when CPython exits?
Ticket created. Meanwhile, you can either: deactivate the rule completely; mark the flagged issues as "won't fix"; or set an exclusion on test files for all issues.
I have set up SonarQube to manage code quality on my project, but I have an issue: on test projects I don't want to run the rule "Source files should have a sufficient density of comment lines" (common-cs:InsufficientCommentDensity). How can I do this? I tried adding, under Issues -> Ignore Issues in Blocks, a "Regular Expression for start of the block" with the pattern using NUnit.Framework; but with no success; the rule still fires on test files.
Sonarqube restriction of a rule
This is the way ContainerOverrides work, contrary to what it should work like. You have two options to solve this: Create a Lambda Function that starts the State Machine. Invoke the Lambda Function when you want to invoke the State Machine. That Lambda function will call the describe_task_definition ECS SDK function to get the complete details of your task definition and while calling start_execution function for step functions, pass all the content of Parameters along with the new/updated environment variables. The Lambda function can be scheduled or run on demand. List all the Environment Variables in the State Machine. Just like you mentioned the new variable, you may mention all the previous variables as well. (It has a disadvantage of redundancy) You may use SSM parameter store for all your variables and then mention all the paths in your State Machine Task definition as well. First option will need some custom implementation, but will save you from manual configurations.
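For option 1, the Lambda body might look roughly like the sketch below. The ARNs, the container index, and the shape of the state-machine input are assumptions for illustration only; adapt them to how your task definition and state machine are actually wired.

```python
# Sketch of option 1: a Lambda that reads the task definition's environment
# and merges the extra variables before starting the execution.
# The state-machine ARN and the input shape are placeholders.
import json
import boto3

ecs = boto3.client("ecs")
sfn = boto3.client("stepfunctions")

def handler(event, context):
    task_def = ecs.describe_task_definition(
        taskDefinition="step-function-generic-script-executor"
    )["taskDefinition"]

    # Environment already defined on the container in the task definition.
    base_env = task_def["containerDefinitions"][0].get("environment", [])
    extra_env = [{"name": "STEP_SCRIPT_NAME", "value": event["script_name"]}]

    sfn.start_execution(
        stateMachineArn="arn:aws:states:us-west-2:123456789012:stateMachine:sample",
        input=json.dumps({"environment": base_env + extra_env}),
    )
```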
I'm using an AWS Step Function to invoke a Fargate container. The ECS Task Definition has several environment variables defined, some with fixed values and some coming from Systems Manager Parameter Store. The State Machine adds one additional environment variable using ContainerOverrides. Unfortunately this seems to replace, not add to, the environment variables specified within the task definition. If I don't define any environment variables in the step definition, then those from the task definition exist at runtime. If I define even one variable at the step definition, then only those from the step definition exist at runtime. How can I get Fargate/ECS/Step Functions to merge the environment variable instead of replacing all? State Machine { "Comment": "Sample State Machine", "StartAt": "Prerequisites", "States": { "Prerequisites": { "Type": "Task", "Resource": "arn:aws:states:::ecs:runTask.sync", "Parameters": { "Cluster": "arn:aws:ecs:us-west-2:1232123123:cluster/step-function-executor", "TaskDefinition": "step-function-generic-script-executor", "LaunchType":"FARGATE", "NetworkConfiguration": { "AwsvpcConfiguration" : { "AssignPublicIp" : "DISABLED", "SecurityGroups" : [ "sg-123", "sg-456" ], "Subnets" : [ "subnet-123" , "subnet-456" ] } }, "Overrides": { "ContainerOverrides": [ { "Name": "step-function-generic-script-container", "Environment": [ { "Name": "STEP_SCRIPT_NAME", "Value": "db-daily-backup-01-prereq" } ] } ] } }, "End": true } } } Task Definition
AWS Step Function ContainerOverrides clearing out already defined environment variables
You shouldn't be deciding whether or not to use static fields/methods based on memory consumption (which likely won't change much). Instead, go with what produces cleaner, more testable code. Static methods are okay (IMO) if you don't need any kind of polymorphic behaviour and the method doesn't logically act on an instance of the type. However, if you've also got static variables involved, that's more of an issue: static variables, other than constants, can make code much harder to test, reuse, and handle correctly across multiple threads. It sounds like you probably should be using instance variables and methods. Just make your Main method create an instance of the class, and it can use that instance to create the delegates to pass to the timers. It's hard to be much more precise without knowing more about what you're doing, but it does sound like you're using statics for immediate convenience rather than because it's the right thing to do, which is always a worry.
I have created a console application in C# with a (static) Main method. My requirement is to initialize two timers whose handlers are called periodically to do some work. Because the handlers are called from Main (which is static), I have made all the other methods/variables static as well. I would like to know, for this scenario, how memory will be consumed if the console app runs for a long time. If I want to apply OOP concepts, do I need to make all methods/variables non-static and access them through an instance of the class, and how would memory consumption differ in that case? Update: here is a snippet of my code:

    public class Program
    {
        readonly static Timer timer = new Timer();
        static DateTime currentDateTime;
        //other static variables
        //-----

        static void Main()
        {
            timer.Interval = 1000 * 5;
            timer.AutoReset = true;
            timer.Enabled = true;
            timer.Elapsed += new ElapsedEventHandler(timer_Elapsed);
            timer.Start();
            //2nd timer
            //-----
            System.Console.ReadKey();
            timer.Stop();
        }

        static void timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            currentDateTime = DateTime.UtcNow;
            PushData();
        }

        private static void PushData()
        {
            //Code to push data
        }
    }
Should I go with static methods or non static methods?
I faced the same issue. The following fixed it for me: change your Amazon email address on www.amazon.com (you can keep using the same mailbox by using this trick: change [email protected] to [email protected]); use the lost-password recovery on the AWS login site to recover the password for the former email address; then use the new password to log in to the AWS console with the former email address.
Issue: I am trying to sign in as the root user for my account from the AWS portal, but after entering my password I keep getting redirected to https://portal.aws.amazon.com/billing/signup?redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation#/start, no matter what. It was working fine until today; this is the first time I've encountered this redirect. Main browser: Chrome, version 62.0.3202.94. Based on similar cases found on Google I tried the following, and the issue persists: Chrome: deleted all cookies and cached data, restarted the device and the browser, used incognito mode; Firefox (an AWS member said it is the most suitable browser): tried normal and private mode, deleted all cookies and cached data; Edge: tried normal and private mode. Someone already suggested using S3Browser, but it does not seem like an efficient solution just to access the AWS portal a few times a week, so I am keeping it as a last resort. If anyone has experienced the same issue or has any suggestions/ideas I would greatly appreciate some help. Thanks in advance.
AWS Sign In Loop - Can't Access the Portal [closed]
If you're talking about gigabytes of data, you might consider loading and plotting the data points in batches, then layering the image data of each rendered plot over the previous one. Here is a quick example, with comments inline:

    import Image
    import matplotlib.pyplot as plt
    import numpy

    N = 20
    size = 4
    x_data = y_data = range(N)

    fig = plt.figure()
    prev = None
    for n in range(0, N, size):
        # clear figure
        plt.clf()

        # set axes background transparent for plots n > 0
        if n:
            fig.patch.set_alpha(0.0)
            axes = plt.axes()
            axes.patch.set_alpha(0.0)
        plt.axis([0, N, 0, N])

        # here you'd read the next x/y values from disk into memory and plot
        # them. simulated by grabbing batches from the arrays.
        x = x_data[n:n+size]
        y = y_data[n:n+size]
        ax = plt.plot(x, y, 'ro')
        del x, y

        # render the points
        plt.draw()

        # now composite the current image over the previous image
        w, h = fig.canvas.get_width_height()
        buf = numpy.fromstring(fig.canvas.tostring_argb(), dtype=numpy.uint8)
        buf.shape = (w, h, 4)

        # roll alpha channel to create RGBA
        buf = numpy.roll(buf, 3, axis=2)
        w, h, _ = buf.shape
        img = Image.fromstring("RGBA", (w, h), buf.tostring())

        if prev:
            # overlay current plot on previous one
            prev.paste(img)
            del prev
        prev = img

    # save the final image
    prev.save('plot.png')

Output: (the rendered plot image is not reproduced here)
I'm building a rather large plot with pyplot (Python matplotlib): 600,000 values, each 32 bits. In practice I guess I could simply do something like this:

    import matplotlib.pyplot as plt
    plt.plot([1,2,3,4], [1,4,9,16], 'ro')
    plt.axis([0, 6, 0, 20])

with two arrays, both allocated in memory. However, sooner or later I'll have to plot files that contain several gigabytes of data. How do I avoid passing two full arrays into plt.plot()? I still need the complete plot, so I suppose just passing an iterator and feeding values line by line can't be done.
Large PyPlot - avoid memory allocation
Any data set that is defined up front and doesn't change is the best case. Memory-mapped files generally win over anything else: most OSes will cache the accessed pages in RAM anyway, and the performance is predictable, so you don't fall off a cliff when the system starts to swap.
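To show how cheap random access through a mapping is, here is a tiny sketch using Python's mmap module. The file name and the fixed record layout are made up for illustration; a real collector would remap (or map a larger region) as the file grows.

```python
# Tiny sketch of reading a growing data file through a memory map;
# "stream.dat" and the 16-byte record layout are illustrative assumptions.
import mmap

RECORD_SIZE = 16

with open("stream.dat", "rb") as f:
    # Map the current length of the file read-only; remap later to pick up
    # data appended after this point.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    n_records = len(mm) // RECORD_SIZE

    # Jump around at will: only the touched pages get faulted into RAM.
    first = mm[0:RECORD_SIZE]
    last = mm[(n_records - 1) * RECORD_SIZE : n_records * RECORD_SIZE]
    mm.close()
```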
I have a service that is responsible for collecting a constantly updating stream of data off the network. The intent is that the entire data set must be available for use (read only) at any time. This means that the newest data message that arrives to the oldest should be accessible to client code.The current plan is to use a memory mapped file on Windows. Primarily because the data set is enormous, spanning tens of GiB. There is no way to know which part of the data will be needed, but when its needed, the client might need to jump around at will.Memory mapped files fit the bill. However I have seen it said (written) that they are best for data sets that are already defined, and not constantly changing. Is this true? Can the scenario that I described above work reasonably well with memory mapped files?Or am I better off keeping a memory mapped file for all the data up to some number of MB of recent data, so that the memory mapped file holds almost 99% of the history of the incoming data, but I store the most recent, say 100MB in a separate memory buffer. Every time this buffer becomes full, I move it to the memory mapped file and then clear it.
Are memory mapped files bad for constantly changing data?
The setup is different for apex domains like example.com and subdomains like blog.example.com. In the case of a subdomain (blog.example.com): go to Domains | Manage Domains in your web panel; locate blog.example.com and click Delete in the Actions column; wait 10 minutes, then click the DNS link below example.com; add a CNAME record with Name = blog, Type = CNAME, Value = yourusername.github.io. (yes, there is a trailing dot). In the case of an apex domain (example.com): go to Domains | Manage Domains in your web panel; locate example.com, click Edit in the Actions column and switch to DNS-only hosting (it's at the bottom); go back to Domains | Manage Domains; click the DNS link below example.com; add an A record with Name = (blank, nothing), Type = A, Value = 204.232.175.78 (GitHub's Pages IP, from their setup page); add a CNAME record with Name = www, Type = CNAME, Value = yourusername.github.io. (again with the trailing dot). Yes, you need both the A and the CNAME records in this case. The only reason I know this is because I did the same thing last weekend; I was quite lost, but the support folks helped me half way and I figured out the rest. This procedure works for me; I needed both cases, so I tested both.
I created a Jekyll-powered blog and am hosting it with GitHub Pages. Now, I want to set up a subdomain (blog.example.com), but can't make it work. I have added a CNAME file with the text: blog.example.com. And I have added two A records in my Dreamhost account for the subdomain, both pointing to 204.232.175.78, provided by GitHub. Any idea about what the missing part is, or if I'm doing something incorrectly?
Set up custom subdomain for Jekyll Blog hosted in Github Pages
You are forwarding everything to PHP-FPM, but by default the PHP-FPM pool config only allows .php files to be served. You can check /usr/local/etc/php-fpm.d/www.conf inside the php-fpm container and search for security.limit_extensions to see this. So you have two solutions. Solution 1: mount your project source into the container running Nginx, like this:

    # docker-compose.yml
    webserver:
      image: nginx:1.17-alpine
      restart: unless-stopped
      ports:
        - "8000:80"
      volumes:
        - ./:/var/www/html

By doing this, Nginx can find your static files and serve them itself. Note that /var/www/html is the project root path you defined in your Nginx config file; for example, an Nginx config for a Laravel project usually looks like:

    server {
        listen 80;
        index index.php index.html;
        root /var/www/html/public;
        ...

Solution 2: add .css, .js, etc. to the PHP-FPM pool config. With this solution you override the PHP-FPM config file and add your static file extensions to the list of extensions PHP-FPM is allowed to serve (check my demo for a working example). This solution doesn't require mounting your project into the Nginx container, but in practice it is not as good for production as solution 1.
I am running NGINX, PHP-FPM and DB in separate container. Inside PHP-FPM is mounting a Laravel project from my local machine. I've successfully forward the PHP request to PHP-FPM container (port 9000) while accessing 127.0.0.1:8000. Unfortunately, the requests with assets extension (e.g. .css, .js) has ran into 403 forbidden. Following are my NGINX configuration script. server { listen 80; add_header X-Frame-Options "SAMEORIGIN"; add_header X-XSS-Protection "1; mode=block"; add_header X-Content-Type-Options "nosniff"; index index.php; charset utf-8; location / { try_files $uri $uri/ /index.php?$query_string; } location = /favicon.ico { access_log off; log_not_found off; } location = /robots.txt { access_log off; log_not_found off; } error_page 404 /index.php; location ~ \.php$ { fastcgi_pass fpm:9000; fastcgi_param SCRIPT_FILENAME /app/public$fastcgi_script_name; fastcgi_index index.php; include fastcgi_params; } location ~* \.(css|js|gif|ico|jpeg|jpg|png)$ { fastcgi_pass fpm:9000; fastcgi_param SCRIPT_FILENAME /app/public$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_index index.php; include fastcgi_params; } } The request and response header for app.css file. Not sure if anyone has ran into similar problems and have solution for this?
Docker - NGINX Container forward to PHP-FPM Container
AWS Lambda handles both synchronous and asynchronous handler functions. async means two things: the function returns a Promise, and you are able to use await inside it. AWS Lambda happens to understand Promises as a return value, which is why async functions work as well. So if you need await, go with async. You could also not declare the function as async and return a Promise (or a chain of Promises) instead.
I'm creating an AWS SAM application using Node.js Lambda functions. The default template has an async handler function: exports.lambdaHandler = async (event, context) => { // ... return { statusCode: 200, body: JSON.stringify({ hello: "world" }) }; }; Is there any benefit to having this handler function be async vs sync, since my understanding is each time a Lambda function is invoked it runs separately from other instances?
AWS Lambda: Is there a benefit to using an async handler function for the Node runtime?
If you have to install your library to test it, you're doing something wrong. :) You are absolutely right that that is not a nice way to work. Here's a better way: Write tests—lots of tests—that you can run on your library to make sure it works. Since you're using PHP, use PHPUnit for this part. If you do find a bug while using the library in one of your other projects, write a test that exercises that bug. Then you can code —> test —> repeat in your library until the tests pass again.
Let's say I have some pet-projects on Laravel (or any other PHP project with Composer). They have some similar functionality and I want to extract in into a composer package hosted on GitHub. What are my actions? I see this approach: Create a new project (e.g. in PhpStorm). Write an extension with tests (migrate from one of the projects). Create a GitHub repository, push the code there. Add it to packagist. Composer require the package on all projects and install it properly. This is ok. But what if I need to add some new feature or fix a bug? How do I do it properly? It's convenient to try it directly on some of the projects, which has the extension installed, but it's strange to edit "vendor" directory, and even if the files are edited, how to push them back to the repository? It's also awkward to edit the code in the separate PHPStorm project for the repository "blindly" and push it each time, the composer update from the project to see how it works. Any other convenient flow? Thanks.
How to maintain a decoupled git component?
Fromthe docs:The database time zone [DBTIMEZONE] is relevant only for TIMESTAMP WITH LOCAL TIME ZONE columns. Oracle recommends that you set the database time zone to UTC (0:00)...SYSDATE/SYSTIMESTAMPwill return the time in the database server's OS timezone. Selecting aTIMESTAMP WITH LOCAL TIME ZONEdatatype will return the time in your session's timezone (ie,SESSIONTIMEZONE).select CAST(systimestamp AS timestamp(0) with local time zone) as local_time, systimestamp as server_time from dual;DBTIMEZONEis only used as the base timezone stored inTIMESTAMP WITH LOCAL TIME ZONEcolumns - which you never see, because when you select from one of those columns it gets translated into your session timezone.See this similar question for a very detailed answer.
I ran select SYSDATE from dual; and got:

    SYSDATE            |
    -------------------|
    2019-10-09 08:55:29|

Then I ran SELECT DBTIMEZONE FROM DUAL; and got:

    DBTIMEZONE|
    ----------|
    +00:00    |

In the first output the time is in EST, while the second output suggests the time zone is UTC. How do I check the Oracle server time zone via a SQL query?
Oracle server timezone using SQL query
If you're running Docker 1.9 or 1.10 and use the version 2 format for your docker-compose.yml, you can directly access other services through either their service name or their container name. See my answer on this question, which has a basic example to illustrate this: https://stackoverflow.com/a/36245209/1811501. Because the connection between services goes through the private container-to-container network, you don't need the randomly assigned ports: if a service publishes/exposes port 80, you can simply connect to it on port 80.
I am learning how to use Docker and I am in the process of setting up a simple app with a frontend and a backend using CentOS + PHP + MySQL. On my machine, "example", I have configured two Docker containers:

    frontend:
      build: ./frontend
      volumes:
        - ./frontend:/var/www/html
        - ./infrastructure/logs/frontend/httpd:/var/logs/httpd
      ports:
        - "80"
      links:
        - api

    api:
      build: ./api
      volumes:
        - ./api:/var/www/html
        - ./infrastructure/logs/api/httpd:/var/logs/httpd
      ports:
        - "80"
      links:
        - mysql:container_mysql

The issue I am facing is that when I access a container, I need to specify a port number for either the frontend (32771) or the backend (32772). Is this normal, or is there a way to create hostnames for the API and the frontend of the application? How does this work on deployment to AWS? Thanks in advance.
Docker example for frontend and backend application
How is "least recently used" parameter determined? I hope that a dataframe, without any reference or evaluation strategy attached to it, qualifies as unused - am I correct? Results are cached on spark executors. A single executor runs multiple tasks and could have multiple caches in its memory at a given point in time. A single executor caches are ranked based on when it is asked. Cache just asked in some computation will have rank 1 always, and others are pushed down. Eventually when available space is full, cache with last rank is dropped to make space for new cache. Does a spark dataframe, having no reference and evaluation strategy attached to it, get selected for garbage collection as well? Or does a spark dataframe never get garbage collected? Dataframe is an execution expression and unless an action is called, no computation is materialised. Moreover, everything will be cleared once the executor is done with computation for that task. Only when dataframe is cached (before calling action), results are kept aside in executor memory for further use. And these result caches are cleared based on LRU. Based on the answer to the above two queries, is the above strategy correct? Your example seems like transformation are done in sequence and reference for previous dataframe is not used further (no idea why you are using cache). If multiple executions are done by same executor, it is possible that some results are dropped and when asked they will be re-computed again. N.B. - Nothing is executed unless a spark action is called. Transformations are chained and optimised by spark engine when an action is called.
I have the following strategy to change a dataframe df. df = T1(df) df.cache() df = T2(df) df.cache() . . . df = Tn(df) df.cache() Here T1, T2, ..., Tn are n transformations that return spark dataframes. Repeated caching is used because df has to pass through a lot of transformations and used mutiple times in between; without caching lazy evaluation of the transformations might make using df in between very slow. What I am worried about is that the n dataframes that are cached one by one will gradually consume the RAM. I read that spark automatically un-caches "least recently used" items. Based on this I have the following queries - How is "least recently used" parameter determined? I hope that a dataframe, without any reference or evaluation strategy attached to it, qualifies as unused - am I correct? Does a spark dataframe, having no reference and evaluation strategy attached to it, get selected for garbage collection as well? Or does a spark dataframe never get garbage collected? Based on the answer to the above two queries, is the above strategy correct?
Does spark automatically un-cache and delete unused dataframes?
If you're asking how to get the Amazon Linux-based image to install curl without prompting you, you can add -y to yum update as well:

    # Dockerfile for Amazon Linux
    FROM nginx
    RUN yum -y update && yum install -y curl
The following Dockerfile lines are supposed to install curl inside a custom nginx image running on Ubuntu. The second block is an attempt to do the same on Amazon Linux. Any suggestion as to what the yum equivalent would be for the rest of the apt-get command, i.e. --no-install-recommends curl && rm -rf /var/lib/apt/lists/*?

    # Dockerfile for Ubuntu
    FROM nginx
    RUN apt-get update && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*

    # Dockerfile for Amazon Linux
    FROM nginx
    RUN yum update && yum install -y curl
Installing curl inside nginx docker image
Build a Docker image from the official TensorFlow Serving Dockerfile, then inside the container run:

    /usr/local/bin/tensorflow_model_server --port=9000 --model_config_file=/serving/models.conf

where /serving/models.conf is a file similar to yours.
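Once the server is up with that config file, each model is addressable by its name. As a sketch, the snippet below queries both models over the REST API; it assumes the server was also started with something like --rest_api_port=8501 and that port is published, and the input shape is a placeholder for whatever your models expect.

```python
# Sketch of calling two served models by name over the REST API; the port
# mapping (8501) and the input shape are assumptions about your setup.
import json
import requests

def predict(model_name, instances):
    url = f"http://localhost:8501/v1/models/{model_name}:predict"
    resp = requests.post(url, data=json.dumps({"instances": instances}))
    resp.raise_for_status()
    return resp.json()["predictions"]

print(predict("model1", [[1.0, 2.0, 5.0]]))
print(predict("model2", [[1.0, 2.0, 5.0]]))
```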
How can I serve multiple TensorFlow models? I use a Docker container. My config:

    model_config_list: {
      config: {
        name: "model1",
        base_path: "/tmp/model",
        model_platform: "tensorflow"
      },
      config: {
        name: "model2",
        base_path: "/tmp/model2",
        model_platform: "tensorflow"
      }
    }
How can I use tensorflow serving for multiple models
Crontab executes one scheduled command/script at a time, and piping the output of your script to a grep command in the crontab entry won't work the way you expect. Furthermore, cron output is discarded (or mailed) unless you redirect it, so you won't see the output unless you save it to a file. I suggest something like this: edit your script to redirect its output through your grep filter into a file, for example by adding

    DATE=$(date +"%m_%d_%Y")
    some_command | grep -v Warning >> /tmp/$DATE.log

then edit your cron job to execute the script every day as you did, removing everything after the pipe:

    45 5 * * * /home/username/barc/backupsql.sh

To monitor the output you can use tail:

    tail -f /tmp/$DATE.log
I don't want to save the "warning" messages in the log file that my crontab entry creates; I only want the "error" messages. Does anyone know how I can exclude these messages? I have tried a grep -v but it doesn't work:

    45 5 * * * /home/username/barc/backupsql.sh 2>&1 | grep -v 'Warning: Using a password on the command line interface can be insecure.'

Thanks in advance to anyone trying to help me.
How to keep "warning" messages out of the cron job log
getElementsByClassName returns a list of DOM nodes, so you want to do this:

    document.getElementsByClassName('bg-gray-light ml-1')[0].click()
I'm working with GitHub issues and I want to create a button that fills the comment textarea and submits it. So far so good: I need to bind the function to a button, and at the moment I have managed to fill the textarea but couldn't submit the comment.

    clickToRespond = () => {
        document.getElementById('new_comment_field').value = 'This is a test'
        document.getElementsByClassName('bg-gray-light ml-1').click()
    }

Solution:

    document.getElementsByClassName('bg-gray-light ml-1')[0].childNodes[1].click()
Click Button that fills an textArea & submits it
I faced a similar problem and found a workaround in the search_after API, which is not affected by the 10k-element limit and can therefore be useful when you know you might have more results than that and still want to show how many there are. It allows relatively easy fetches without hard restrictions on filters or search, at the cost of somewhat awkward pagination, which can still be handled fairly easily. It is tricky to use because: all indexes you are searching should be sorted by a field that is unique across all indexes; the "from" query parameter (of a page) should be set to 0; and you send the "search after" value of that field from the last element on the previous page. You no longer operate on page numbers, just on the last element of each page and the page size you want after that element. It is not really an answer on how to jump straight to the last page, but it is a way to avoid changing the result window size or limiting results to the first 10k only.
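A minimal sketch of that pagination loop with the official Python client follows; the index name and the sort fields (including the unique tiebreaker "id") are placeholders for your own mapping.

```python
# Sketch of search_after pagination; index name and the unique tiebreaker
# field ("id") are placeholders for your mapping.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

page_size = 100
sort = [{"timestamp": "asc"}, {"id": "asc"}]  # must include a unique field
search_after = None

while True:
    body = {"size": page_size, "query": {"match_all": {}}, "sort": sort}
    if search_after:
        body["search_after"] = search_after
    hits = es.search(index="my-index", body=body)["hits"]["hits"]
    if not hits:
        break
    # ... render/process this page ...
    search_after = hits[-1]["sort"]  # feed the last sort values into the next page
```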
I am using a search query to retrieve documents from Elasticsearch, and it returns nearly 50k documents. I have a UI which renders 100 documents per page and has a button to jump to the last page. Whenever I try to hit the last page I get the error "Result window is too large". I don't wish to increase index.max_result_window beyond 10000.
How to Jump to last page in elastic search when search query returns more than 10000 documents
You can't edit built-in profiles. Instead, you'll have to create a new profile, and then you'll be able to edit the rules to your heart's content. I suggest you initialize your new profile either by copying the rules from the built-in profile of your choice, or by inheriting from that profile. Note that choosing the latter means your profile can (and probably will) be updated by upgrading your analyzers; each new version of SonarJava, for instance, implements new rules and many of them are added to the Sonar way profile.
I'm a big fan of SonarQube as a developer. This time though I need to do admin work since I need to configure it from a fresh install. I see this rule in SonarQube "Methods should not have too many lines" but I don't see that it belongs to any of the default profiles ("FindBugs+FB-Contrib", "Sonar Way"). I think that's the reason I don't see any rule violations of this type from any of the projects. I thought this should be part of a common default profile since this is a pretty common violation. How can I add this rule to the profile? There are other rules that I need to add which I expected also to be in the default/available profiles already.
SonarQube rules are not getting detected
Have a look here: https://github.com/kubernetes/ingress-nginx/blob/master/cmd/nginx/flags.go#L133-L137. It seems you either haven't got the full chain like you expected, or the certificate is missing the "Authority Information Access" X.509 v3 extension.
Recently, I got a certificate from Let's Encrypt with the Must Staple extension on it, requiring a OCSP response to be sent with the certificate. I am using the kubernetes ingress-nginx(on Google Cloud) controller for TLS. The certificate is working great on Chrome(since it doesn't use OCSP), but it's failing on all other browsers because a OCSP response is not being stapled to it. The certificate I am using for the public key is the full certificate chain from Let's Encrypt. I'm not sure why nginx isn't attaching an OCSP response even though kubernetes supports OCSP.
Nginx Ingress controller and OCSP Must Staple
Use "path" parameters. See "Checkout multiple repos" inhttps://github.com/actions/checkout
I'm trying to implement some automation in my GitHub repository, but there are some problems I'm facing. For now, I can't understand how to get sources into a specified folder. For example, I have two branches: the first one is the sources branch and the second one is the test branch. I want to clone the first branch, build the project (the project is in C++), then clone the second branch, build it and run the tests. Can I do this with actions/checkout, or do I have to use another approach? Also, if you have some understanding of how the actions/checkout action is implemented, please let me know; I'm very interested.
How to use actions/checkout@master to get sources into specified folder?
Is your server a domain controller? On my DC it gives the DNS name:

    PS C:\> [system.net.dns]::GetHostEntry("127.0.0.1")

    HostName                    Aliases AddressList
    --------                    ------- -----------
    VMESS01.SILOGIX-ESS01.local {}      {fe80::7535:fadb:225a:4a2a%12, 88.191.232.219, 2002...
On a new Windows 2012 server, Dns.GetHostEntry(IPAddress) returns the locally specified host name but not the name known to DNS for the IP address (the IP address is the new server's own). Running nslookup on the same IP returns the correct DNS name for the server. Likewise, running GetHostEntry() for 127.0.0.1 returns the local host name instead of "localhost"; I don't know if this is related. I thought GetHostEntry() was supposed to return the name as specified in DNS. Why does it return the locally defined host name when supplied with the local IP address?
Dns.GetHostEntry returns local host name not name known to DNS
Confirm first that you are using an SSH URL as the remote (run git remote -v inside your repo). Then, as commented, add the SSH key to the ssh-agent, as documented by GitHub. You can automate that by adding it to your ~/.bashrc. The OP adds in the comments that all of this was happening because of a small typo in the config file, presumably ~/.ssh/config.
I have a git repo. I have completed the necessary procedure to setup ssh keys locally and on the repo. But, I face a weird problem. The terminal tab from where I performed the ssh setup allows me to perform normal git operations with the repo but if I try to do it from a new terminal instance it throws the following error:fatal: Could not read from remote repository.Please make sure you have the correct access rights and the repository exists.I have tried every possible solution on stackoverflow but the problem still persists. What could be the problem?I am using a macOS.
Git refusing to perform activities with remote branch
When you clone a remote repository, by default your local working directory will be on the remote repository's default branch. For a long time this was the master branch, but GitHub has recently started using the name main instead of master. It sounds like your repository may have been created someplace else and then pushed to GitHub. Regardless of how you arrived at this situation, you have two branches named main and master, and it sounds like main is the default branch, so after cloning the repository you will be on main. You can switch to the master branch by running:

    git switch master

or:

    git checkout master

The git switch syntax is newer; the two commands are largely equivalent (git checkout does more than git switch). You can change the default branch of your GitHub repository by going to the repository settings and clicking "Branches" on the left. This takes you to the branch settings, where the first section is "Default branch"; click the icon with the two arrows and select master as the default branch. After that, git clone will give you a working directory that is on the master branch.
Beginner GitHub user here; I cannot figure out how to use it. I have a repository with two branches: one called main, which has a README file, and another called master, with all my other files (only a few). I pushed them from my desktop, and when I want to clone or pull the entire repository to my laptop, it only pulls the README file, nothing else; basically just the content of the branch called main. Any help will be greatly appreciated, thanks in advance.
Why I can't clone/pull a whole repository from github, only the README file? [closed]
The problem is that your rules don't match the SSL non-www URLs, so the redirect from https://example.com to https://www.example.com never happens on your server. You can use the following generic rule to redirect your domains to https://www:

    RewriteEngine on
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} !^www\.
    RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$
    RewriteRule ^.*$ https://www.%1%{REQUEST_URI} [NE,L,R=301]

Make sure to clear your browser cache before testing these rules.
I'm just going crazy with my issue and hope for your help. I have one webstore with two domains pointing to the same path, and the webstore chooses which content to show depending on the domain:
www.yogabox.de - German content
www.yogabox.co.uk - English content
I'm trying to rewrite all variants of yogabox.de to https://www.yogabox.de and the same for yogabox.co.uk to https://www.yogabox.co.uk.
I'm using these rules:
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} !^www\.yogabox\.co\.uk$
RewriteCond %{HTTP_HOST} !^yogabox\.co\.uk$
#RewriteCond %{HTTP_HOST} !^www\.yogabox\.de$
RewriteRule ^(.*)$ https://www.yogabox.de/$1 [R=301,L]

RewriteCond %{HTTPS} off
#RewriteCond %{HTTP_HOST} ^yogabox\.co\.uk$
RewriteCond %{HTTP_HOST} !^www\.yogabox\.de$
RewriteCond %{HTTP_HOST} !^yogabox\.de$
RewriteRule ^(.*)$ https://www.yogabox.co.uk/$1 [R=301,L]
Only https://yogabox.de and https://yogabox.co.uk are redirected wrongly. Where is the problem? I have already checked whether the problem is an invalid certificate, as in "WWW to NON WWW Urls (Remove WWW) using Apache (.htaccess)", but the certificates are valid both with and without www.
htaccess rewrite for https multiple domains
I agree that the UI in this case is not very self-explanatory. You should read the documentation (e.g. Target groups for your Application Load Balancers) first to get a general understanding of the relationship between a load balancer (LB) and a target group (TG). TL;DR: the TG is not associated with an LB directly. Instead, they are associated via a listener.
A listener is a process that checks for connection requests, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets.
Source: Listeners for your Application Load Balancers
Therefore, the correct steps to take after clicking "Associate with an existing load balancer" are:
1. Select a load balancer
2. Go to the "Listeners" tab
3. Either edit an existing listener or add a new listener
4. Set rules and conditions, and attach the TG
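For completeness, the same association can also be scripted with the AWS CLI — a sketch only; the ARNs are placeholders you would substitute with your own, and the protocol/port are assumptions:

# Create a listener on the existing load balancer that forwards to the
# target group (ARNs, protocol and port below are placeholders/assumptions).
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-lb/123 \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/my-tg/456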
I am trying to add a load balancer to a target group. In EC2 > Target Groups, I can see my target group that I want to add an existing load balancer to. So I select "Associate with an existing load balancer", which brings me to this page. I then select the load balancer... but then what? There's no button that says "Add load balancer to target group". The "Create Load balancer" blue button is there to create a new load balancer (not what I want). It doesn't look like anything in the "Actions" dropdown is appropriate. How can I attach the selected load balancer to the target group?
AWS - Add existing load balancer to target group
We have also upgraded the cluster/node version from 1.21 to 1.22 directly from GCP, which successfully upgraded both the nodes and the cluster. Even after upgrading, we are still seeing the IngressList warning for /apis/extensions/v1beta1/ingresses. We are going to upgrade our cluster from 1.22 to 1.23 tomorrow and will update you soon.
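As a side check, one way to confirm that no Ingress objects actually rely on the removed API group is to list them through the supported one — nothing assumed here beyond kubectl being configured against the cluster:

# List all Ingress resources through the supported networking.k8s.io/v1 API;
# deprecated extensions/v1beta1 warnings usually come from clients or
# controllers still requesting the old group, not from the objects themselves.
kubectl get ingress --all-namespaces
kubectl get --raw /apis/networking.k8s.io/v1/ingresses | head -c 300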
I'm trying to upgrade some GKE clusters from 1.21 to 1.22 and I'm getting some warnings about deprecated APIs. I am also running Istio 1.12.1 in the cluster. One of the warnings is causing me some concerns:
/apis/extensions/v1beta1/ingresses
I was surprised to see this warning because we are up to date with our deployments. We don't use Ingresses. Digging deeper, I got the below details:
➜ kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
{
  "kind": "IngressList",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "resourceVersion": "191638911"
  },
  "items": []
}
It seems it is an IngressList that calls the old API. I tried deleting it:
➜ kubectl delete --raw /apis/extensions/v1beta1/ingresses
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
I am neither able to delete it nor able to upgrade. Any suggestions would be really helpful.
[Update]: My GKE cluster got updated to 1.21.11-gke.1900 and after that the warning messages are gone.
Deprecated API calls blocking update to GKE 1.22 - [Update]
If you use git on OS X, make sure to check:
- the official version of GitX
- the experimental fork by Brotherbard (source: brotherbard.com)
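The pushing itself can also be done from Terminal without a GUI — a minimal command-line sketch; the tag name v1.0.1 and the file path are just examples:

# Stage and push everything, marking the new version with a tag
git add -A
git commit -m "Version 1.0.1"
git tag v1.0.1
git push origin master --tags

# For a more incremental change, stage and push a single file
git add path/to/file.txt
git commit -m "Tweak file.txt"
git push origin master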
I'm really new to the whole GitHub thing so this might seem like a basic question but I can't figure it out. I have a GitHub repository set up on my machine, I've managed at some point to push the master but now I have made some changes and I want to push the entire thing again (pretty much everything changed). What I'm wondering is: How do you push an entire repository to create a new version (from version 1.0.0 to 1.0.1)? How do you push a single file for more incremental changes?
How do you push changes to GitHub on OS X 10.6?
I'm not sure why you need all that data in the URL. You should be storing things like the submission title, its date and author in a database and then refer to the submission with an ID. That way, your URLs will be shorter and prettier:
http://www.example.org/article.php?id=1
http://www.example.org/article/1/
You can accomplish this with a simple RewriteRule in your .htaccess file, like so:
RewriteEngine On
RewriteRule ^article/([0-9]+)/ article.php?id=$1
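On the PHP side, article.php would then look up the record by that ID — a rough sketch; the $pdo connection and the articles table/column names are assumptions made for illustration, not part of the original answer:

<?php
// article.php -- fetch one submission by numeric ID (sketch; $pdo and the
// table/column names are assumptions for illustration).
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

$stmt = $pdo->prepare('SELECT title, url, submittor, submissiondate FROM articles WHERE id = ?');
$stmt->execute([$id]);
$article = $stmt->fetch(PDO::FETCH_ASSOC);

if ($article === false) {
    http_response_code(404);
    exit('Article not found');
}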
When I click on a comment section for a given entry on a site I have, the URL looks like this:
http://www...com/.../comments/index.php?submission=Portugal%20Crushes%20North%20Korea&submissionid=62&url=nytimes.com/2010/06/22/sports/soccer/22portugalgame.html?hpw&countcomments=3&submittor=johnjohn12&submissiondate=2010-06-21%2019:00:07&dispurl=nytimes.com
I want to make it look like this URL:
http://www...com/.../comments/Portugal-Crushes-North-Korea-62
I understand that this involves adding rules to the .htaccess file. I have two questions:
Since I am using the GET method in PHP, the ugly URL has a bunch of variables appended to it. I don't want all of these variables to appear in the clean URL. Is it possible to only include a few of the variables in the clean URL but still have a rule directing it to an ugly URL with all of the variables?
Once I have the .htaccess rules written, do I go back and change the links in the source code to direct to the clean URLs? If so, how do I do this using the GET method when the clean URL does not have all of the variables that I want to pass along?
Thanks in advance,
John
Doing a URL re-write while using PHP GET method
You need to include the tokenize processor and set the property tokenize_pretokenized to True. This will assume the text is tokenized on whitespace and sentence-split by newline. You can also pass a list of lists of strings, each list representing a sentence and the entries being the tokens.
This is explained here: https://stanfordnlp.github.io/stanza/tokenize.html
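Applied to the configuration from the question, that would look roughly like this — a sketch; the model paths are the ones the asker already uses, and the exact option name may differ slightly between the old stanfordnlp package and its successor stanza:

import stanfordnlp

config = {
    # add 'tokenize' in front of the other processors ...
    'processors': 'tokenize,pos,lemma,depparse',
    'lang': 'de',
    # ... and tell the tokenizer the text is already tokenized:
    # whitespace-separated tokens, newline-separated sentences.
    'tokenize_pretokenized': True,
    'pos_model_path': './de_gsd_models/de_gsd_tagger.pt',
    'pos_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
    'lemma_model_path': './de_gsd_models/de_gsd_lemmatizer.pt',
    'depparse_model_path': './de_gsd_models/de_gsd_parser.pt',
    'depparse_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
}

nlp = stanfordnlp.Pipeline(**config)
doc = nlp(text)  # text: pretokenized string, or a list of lists of tokens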
I have a tokenized file and I would like to use StanfordNLP to annotate it with POS and dependency parsing tags. I am using a Python script with the following configuration:
config = {
    'processors': 'pos,lemma,depparse',
    'lang': 'de',
    'pos_model_path': './de_gsd_models/de_gsd_tagger.pt',
    'pos_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt',
    'lemma_model_path': './de_gsd_models/de_gsd_lemmatizer.pt',
    'depparse_model_path': './de_gsd_models/de_gsd_parser.pt',
    'depparse_pretrain_path': './de_gsd_models/de_gsd.pretrain.pt'}
nlp = stanfordnlp.Pipeline(**config)
doc = nlp(text)
However, I receive the following message:
missing: {'tokenize'} The processors list provided for this pipeline is invalid. Please make sure all prerequisites are met for every processor.
Is it possible to skip the tokenization step using a Python script? Thanks in advance!
How can I use StanfordNLP tools (POSTagger and Parser) with an already Tokenized file?
If you have several users, I would suggest identifying them with service accounts. Once you've created service accounts for every user, you can assign them to Pods with the spec.serviceAccountName keyword. This field is available inside Pods using the Downward API. For example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: busybox
      command: [ "sh", "-c", "echo $SERVICE_ACCOUNT" ]
      env:
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
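Creating one service account per user is a single command each — a sketch, with user-1 and user-2 as placeholder names:

# Create a dedicated service account for each user (names are placeholders),
# then reference it from the pod spec via spec.serviceAccountName.
kubectl create serviceaccount user-1
kubectl create serviceaccount user-2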
I have 20 users, and I need an individual container for every user. I want to pass a 'user_id' to each container through environment variables. When I receive a message, I need to create another container with the 'user_id' I received. What is the best way to organize this in Kubernetes?
Can I create a set of unique Docker containers with different environments in Kubernetes?
I do something like this in the nginx config file on one of my sites and it works without a problem. I do not have anything in my ApplicationController to force the redirect either.

server {
  listen 80;
  server_name my_website.co;
  rewrite ^ https://$server_name$request_uri? permanent;
}

server {
  listen 80;
  server_name www.my_website.co;
  rewrite ^ https://$server_name$request_uri? permanent;
}

server {
  listen 443;
  server_name my_website.co;
  root /home/deployer/my_website/public;
  ssl on;
  ssl_certificate /etc/nginx/certs/my_website.co.crt;
  ssl_certificate_key /etc/nginx/certs/my_website.co.private.key;
  # rest of your config file below
}
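On current nginx versions the same redirect is usually written with a plain return instead of rewrite — a sketch, assuming the same domain names as above:

# Redirect all plain-HTTP traffic for both hostnames to HTTPS in one block.
server {
    listen 80;
    server_name my_website.co www.my_website.co;
    return 301 https://$host$request_uri;
}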
I am fighting with this issue the whole day. Here's my nginx.conf:
upstream my_website.co {
  server 127.0.0.1:8080;
}

server {
  listen 80;
  listen 443 default ssl;
  # return 301 https://www.my_website.co;  <- I put it here, but it didn't work
  ssl on;
  ssl_certificate /etc/nginx/certs/my_website.co.crt;
  ssl_certificate_key /etc/nginx/certs/my_website.co.private.key;
  server_name my_website.co _;
  root /home/deployer/my_website/public;

  location / {
    proxy_set_header X_FORWARDED_PROTO $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header CLIENT_IP $remote_addr;

    if (!-f $request_filename) {
      proxy_pass http://my_website.co;
      break;
    }

    if (-f $document_root/system/maintenance.html) {
      return 503;
    }
  }
  # return 301 https://www.my_website.co;  <- I put it here, but it didn't work
}

Could you help me, please: how do I redirect everything from http to https? My Rails code (ApplicationController):
before_filter :ensure_domain
APP_DOMAIN = 'www.my_website.co'

def ensure_domain
  if request.env['HTTP_HOST'] != APP_DOMAIN && Rails.env != 'development'
    redirect_to "https://#{APP_DOMAIN}#{request.env['REQUEST_PATH']}", :status => 301
  end
end

I'll be grateful for any help, I am lost here. Thank you
How to set up redirect from http to https with nginx?
You can manually evict entries from your second-level cache for a specific entity instance, an entire entity type, or a collection.
From http://knol.google.com/k/fabio-maulo/nhibernate-chapter-16-improving/1nr4enxv3dpeq/19#:
For the second-level cache, there are methods defined on ISessionFactory for evicting the cached state of an instance, entire class, collection instance or entire collection role.
sessionFactory.Evict(typeof(Cat), catId);                // evict a particular Cat
sessionFactory.Evict(typeof(Cat));                       // evict all Cats
sessionFactory.EvictCollection("Eg.Cat.Kittens", catId); // evict a particular collection of kittens
sessionFactory.EvictCollection("Eg.Cat.Kittens");        // evict all kitten collections
I've got a web application that is 99% read-only, with a separate service that updates the database at specific intervals (like every 10 minutes). How can this service tell the application to invalidate its second-level cache? Is it actually important? (I don't really care if I have some stale data.) If I don't invalidate the cache, how much time is needed for the records to get updated (if using SysCache)?
NHibernate second-level cache with external updates
Yes, you are right. Pods on the same node share that node's CPU and memory resources, and therefore all of them are expected to go down in the event of a node failure.
But you also need to consider it at the pod level. There can be situations where the pod itself fails while the node keeps working fine. In such cases, having multiple pods keeps the application serving traffic and makes it more highly available. From a performance perspective, more pods can also handle more concurrent requests, which helps keep latency down for your application.
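For experimenting with this on Minikube, scaling an existing Deployment up and down is a one-liner — a sketch, with the Deployment name as a placeholder:

# Run two replicas of the application (deployment name is a placeholder).
kubectl scale deployment my-app --replicas=2

# Check how the pods are scheduled and addressed.
kubectl get pods -o wide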
I'm new to Kubernetes and I'm testing with Minikube locally. I need some advice about Kubernetes horizontal scaling in the following scenario:
- the cluster is composed of only 1 node
- there is only 1 pod on this node
- only one application is running in this pod
Is there a benefit to deploying a new pod on this node only, to scale my application? If I understand correctly, pods share the system's resources, so if I deploy 2 pods instead of 1 on the same node, there will be no performance increase. There will be no availability increase either, because if the node fails, the two pods will also shut down. Am I right about my two previous statements? Thanks
Advantage of multiple pods on the same node
"My current workaround is to simply create a 'bug reporting' account, and share that account's access token with the source code."
That remains the simplest solution, especially using a PAT (Personal Access Token). As I explained in "Where to store the personal access token from GitHub", using a PAT allows for easy revocation if needed, without having to invalidate the account password.
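For reference, creating an issue with such a token is a single authenticated POST against the GitHub REST API — a sketch; OWNER, REPO and the token value are placeholders:

# Create an issue in OWNER/REPO using the bug-reporting account's token.
curl -X POST \
  -H "Authorization: token YOUR_PERSONAL_ACCESS_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  -d '{"title": "Automated bug report", "body": "Details collected by the app"}' \
  https://api.github.com/repos/OWNER/REPO/issues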
I have an Electron app that has a bug reporting feature. I would like this bug reporter to use the GitHub API to create an issue automatically. Here is the catch: I don't want my users to create and use their own GitHub account to do so. Is it possible to use the GitHub API to create issues without requiring an account? My current workaround is to simply create a 'bug reporting' account, and share that account's access token with the source code. That way, whenever anybody creates an issue it's listed under that user. Seems like a stretch, and I'm wondering if there is a better way to approach this problem.
Github, create issue on repository without requiring a github user account
This is a known bug in Kestrel RC1: https://github.com/aspnet/KestrelHttpServer/issues/341
You can work around it by forcing Connection: keep-alive:
proxy_set_header Connection keep-alive;
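In the nginx.conf from the question, the directive goes next to the other proxy_set_header lines inside the location block — a trimmed sketch of that block:

location / {
    proxy_pass http://web-app;
    proxy_http_version 1.1;
    # Work around the Kestrel RC1 issue: don't send "Connection: upgrade"
    # for ordinary requests, force keep-alive instead.
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
}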
I am trying to get nginx, ASP.NET 5, Docker and Docker Compose working together on my development environment but I cannot see it working so far. This is the state where I am now, and let me briefly explain here as well.

I have the following docker-compose.yml file:
webapp:
  build: .
  dockerfile: docker-webapp.dockerfile
  container_name: hasample_webapp
  ports:
    - "5090:5090"
nginx:
  build: .
  dockerfile: docker-nginx.dockerfile
  container_name: hasample_nginx
  ports:
    - "5000:80"
  links:
    - webapp:webapp

docker-nginx.dockerfile file:
FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf

and docker-webapp.dockerfile file:
FROM microsoft/aspnet:1.0.0-rc1-update1
COPY ./WebApp/project.json /app/WebApp/
COPY ./NuGet.Config /app/
COPY ./global.json /app/
WORKDIR /app/WebApp
RUN ["dnu", "restore"]
ADD ./WebApp /app/WebApp/
EXPOSE 5090
ENTRYPOINT ["dnx", "run"]

nginx.conf file:
worker_processes 4;

events { worker_connections 1024; }

http {
  upstream web-app {
    server webapp:5090;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://web-app;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }
  }
}

All good, and when I run docker-compose up, it gets two containers up and running, which is also nice. From the host, when I hit localhost:5000, the request just hangs, and when I terminate the request, nginx writes out a log through Docker Compose indicating a 499 HTTP response.

Any idea what I might be missing here?

Update: I added some logging to the ASP.NET 5 app and when I hit localhost:5000, I can verify that the request is being sent to ASP.NET 5 but it's being terminated immediately, giving a healthy response judging from the 200 status. Then nginx sits on it until I terminate the request through the client.
Request hangs for nginx reverse proxy to an ASP.NET 5 web application on docker
The brief answer is that you should never do such a thing, as your API key will be exposed to the public. The correct way of doing this is to specify environment variables in your deployment environment and reference them in your code. Like every cloud platform, Streamlit has an established approach for this; see their docs. Hope this helps!
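In code that usually means the key is read at runtime rather than hard-coded — a minimal sketch; the variable name OPENAI_API_KEY is just a convention, and the openai attribute shown matches the pre-1.0 openai package:

import os
import openai

# Read the key from the deployment environment instead of committing it to
# the repository; Streamlit Community Cloud also offers st.secrets for this.
openai.api_key = os.environ["OPENAI_API_KEY"]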
I need to upload a Python script for a web application, including an OpenAI API key, to my GitHub repository in order to deploy it on the Streamlit Community Cloud. But when I deploy it, it works correctly only the first time, because OpenAI recognizes the committed key as a security breach and I get an email notification from OpenAI about it. I then get the following error in the log when I run the web app in Streamlit:
2023-05-16 10:35:17.560 Uncaught app exception
Traceback (most recent call last):
    raise self.handle_error_response(
openai.error.AuthenticationError: <empty message>
How to upload Python code including an OpenAI API key to my GitHub repository without OpenAI recognizing it as a security leak and disabling the key
Stack-based storage is reclaimed as soon as the function call in which it resides returns. Is it possible that you were using heap-allocated memory (i.e. calling new) within your recursive function? Alternatively, if you're simply looking at the Windows Task Manager or an equivalent, you may be seeing "peak" usage, or seeing some delay between memory being freed by your program and returned to the OS memory pool.
Follow up (after question was edited):
It's not clear what you are doing with the pair<Move*, Piece*>, so I can't tell if the Move objects need to be held by pointer or not. The primary reasons for holding them via pointer would be polymorphism (not used here since you don't appear to be creating subclass objects) and to allow their lifetime to be independent of the call stack. Sounds like you don't have that reason, either. So, why not:
std::pair<Move, Piece*> pr(Move(x1,y1,x2,y2), aPiece);
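To illustrate the difference, a rough compilable sketch of the two options; the types and the function name are made up for illustration, not taken from the original engine:

#include <memory>
#include <utility>

// Hypothetical stand-ins for the chess engine's own classes.
struct Piece {};
struct Move {
    int x1, y1, x2, y2;
    Move(int a, int b, int c, int d) : x1(a), y1(b), x2(c), y2(d) {}
};

void performMove(int x1, int y1, int x2, int y2, Piece* aPiece) {
    // Value semantics: the Move lives inside the pair, no new/delete needed,
    // and the memory is released automatically when pr goes out of scope.
    std::pair<Move, Piece*> pr(Move(x1, y1, x2, y2), aPiece);

    // If a pointer is genuinely required, a smart pointer still frees the
    // allocation when the recursion unwinds.
    std::pair<std::unique_ptr<Move>, Piece*> pr2(
        std::make_unique<Move>(x1, y1, x2, y2), aPiece);
}   // pr and pr2 clean up here

int main() {
    Piece p;
    performMove(0, 1, 2, 3, &p);
}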
I was implementing a chess bot in C++ using recursive algorithms, and the program evaluates over a million nodes per move. Over time the memory it takes up gets to over 1 GB of RAM... But I don't really need the variables that were previously declared once I'm done with the move... So how do I manually flush the stack memory to get rid of the previously declared variables on the stack, just like Java's garbage collector?
UPDATE
I found out that there's this line in my source:
Move * M = new Move(x1,y1,x2,y2);
pair <Move *, Piece *> pr (M,aPiece);
and it's in the perform-move function which gets called a million times in the recursion... My question is, how would you clear such a variable once all the recursion is done and I no longer need it, while keeping it in memory for as long as the recursion is doing its thing?
Clearing memory allocated on the stack in C++
The below should work. I removed matchLabels: a Service selector takes plain key/value labels rather than a matchLabels block.
apiVersion: v1
kind: Service
metadata:
  name: gettime
  labels:
    app: jexxa
spec:
  selector:
    app: jexxa
  type: LoadBalancer
  ports:
    - port: 7000
      targetPort: 7000
I'm getting the error got "map", expected "string" when I try to apply a service.yaml via:
kubectl apply -f service.yaml
Here is my service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: gettime
  labels:
    app: jexxa
spec:
  selector:
    matchLabels:
      app: jexxa
  type: LoadBalancer
  ports:
    - port: 7000
      targetPort: 7000
And here is the whole error message:
error: error validating "service.yaml": error validating data: io.k8s.api.core.v1.ServiceSpec.selector: got "map", expected "string", ValidationError(Service.spec.selector.ports): invalid type for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected "string"]; if you choose to ignore these errors, turn validation off with --validate=false
I also tried it with --validate=false but it didn't work.
I'm getting the error got "map", expected "string" on a Kubernetes Service YAML
A change to the Template will only show up when that Template is used to stamp out new replicas. A change outside of the Template (replicas/selector) will be enacted immediately. If you want to gracefully change the PodSpec or labels of already existing Pods, you should take a look at the Rolling Update functionality of Deployments.
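A minimal sketch of the Deployment equivalent: editing the labels under spec.template here triggers a rolling update that replaces the existing pods. Names and image are placeholders, and apps/v1 is the current API version rather than the extensions/v1beta1 one used in the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
        active: "true"   # changing template labels rolls out new pods
    spec:
      containers:
        - name: hello
          image: nginx   # placeholder image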
I have a ReplicaSet defined in a YAML file which was used to create 2 pods (replicas). It is my understanding that changes in the spec section of the ReplicaSet will be interpreted as changes in the desired state that will eventually get applied to the real world. For example, PATCHing the number of replicas with:
curl --request PATCH \
  --header 'Content-Type: application/strategic-merge-patch+json' \
  --data '{"spec":{"replicas":3}}' \
  http://localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/hello-v2
causes the number of pods to change. However, if I patch the labels to add a label:
curl --request PATCH \
  --header 'Content-Type: application/strategic-merge-patch+json' \
  --data '{"spec": {"template": {"metadata":{"labels":{"active":"true"}}}}}' \
  http://localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/hello-v2
I don't see this change take place on existing pods. New pods (created, for example, by scaling the ReplicaSet) do contain the new label. When does a change to a spec impact the current state and when does it not?
Why doesn't change in .spec.template.metadata.labels for ReplicaSet impact pods
Well, I don't think SonarQube supports that. The only thing I can see you could do is to run the memory profiler as you are doing, but instead of uploading to SonarQube as per your approach, create an HTML report from the memory profiler results and attach it to your Jenkins build.
I am evaluating python memory profiling. I would like to automate memory leak profiling with Jenkins and publish the report to Sonarqube. The current memory tool I am using is memory_profiler. Does Jenkins & Sonarqube support this integration? Or are there any python memory tools which I should consider which can integrate well into Jenkins & Sonarqube? Thanks
Python memory profiler with Sonarqube & Jenkins
You need to replace postgresql-devel with postgresql92-devel or postgresql93-devel.
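Applied to the configuration file from the question, that would look like this (postgresql93-devel chosen arbitrarily between the two suggested packages):

# .ebextensions/packages.config
packages:
  yum:
    postgresql93-devel: []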
I have a configuration file in .ebextensions/packages.config:
packages:
  yum:
    postgresql-devel: []
When I deploy on AWS Elastic Beanstalk, I get this error:
[Instance: i-195762fc Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPreBuild] command failed with error code 1: Error occurred during build: Yum does not have postgresql-devel available for installation.
If you have an idea of the error I have made, I would be very grateful. Thank you.
AWS Deployment with Rails - configuration file in .ebextensions
Prometheus Kubernetes metrics all begin with kube_[SOMETHING]. If you have data exported to Prometheus, go to the Prometheus interface and try typing kube in the expression field; it will offer autocompletion with the available metrics.
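As a concrete illustration, two example alert rules built on such kube_ metrics — the metric names kube_pod_status_phase and kube_pod_container_status_restarts_total come from kube-state-metrics, while the thresholds, durations and labels are arbitrary examples to tune for your cluster:

groups:
  - name: pod-alerts
    rules:
      - alert: PodNotRunning
        # Pod stuck in Pending or Failed for more than 5 minutes.
        expr: sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Failed"}) > 0
        for: 5m
        labels:
          severity: warning
      - alert: PodRestartingTooOften
        # More than 3 container restarts within the last 15 minutes.
        expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
        for: 0m
        labels:
          severity: warning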
I am trying to set up some alerts in Prometheus. I am able to create alerts for nodes in the following categories (network utilization, CPU usage, memory usage), but I am stuck with the pods. Which metrics should I use for pod/container/cluster alert rules?
Custom alert rule for PODS and Clusters
You can download all the certificates from the Apple Developer portal. Everything is explained there step by step. You have to have your developer account activated before you can proceed.
I need to enable push notification services for my app. When I followed this link https://www.parse.com/tutorials/ios-push-notifications, it asked me to download a provisioning certificate from my developer program ID. I followed each and every step to download it, but when I try to install it in Keychain Access it shows the error "The "System Roots" keychain cannot be modified". When I google the issue I'm facing, it leads to downloading a developer certificate and the WWDRCA certificate, and I don't know where to get them. My certificate looks like the image below. Can anyone help, please? Is there any tutorial that walks through this step by step from the basics, including the developer certificate and WWDRCA certificate? And another question: what is the purpose of the developer certificate and WWDRCA certificate, and how do they relate to the push notification provisioning certificate?
Push notification provisioning certificate issue in my certificates
I would like to have a copy of the GitHub repository on my account, and not just in the runner's "container".

That would be better addressed by a mirroring GitHub Action, like wearerequired/git-mirror-action, or better in your case (using tokens), pkgstore/github-action-mirror:
name: "Repository Mirror: GitHub"

on:
  schedule:
    - cron: "*/5 * * * *"
  workflow_dispatch:

jobs:
  mirror:
    runs-on: ubuntu-latest
    name: "Mirror"
    steps:
      - uses: pkgstore/github-action-mirror@main
        with:
          source_repo: "https://github.com/${{ github.repository }}.git"
          source_user: "${{ secrets.MIRROR_SOURCE_USER_GITHUB }}"
          source_token: "${{ secrets.MIRROR_SOURCE_TOKEN_GITHUB }}"
          target_repo: "${{ secrets.MIRROR_TARGET_URL_GITHUB }}"
          target_user: "${{ secrets.MIRROR_TARGET_USER_GITHUB }}"
          target_token: "${{ secrets.MIRROR_TARGET_TOKEN_GITHUB }}"
That way, you can send the source repository to a private repository of yours.
I am trying to set up a GitHub action that periodically clones an external repository (e.g., targetuser/targetrepo, for which I have a personal access token). The GitHub action runs smoothly, but I have no clue where the repository is being cloned: I cannot see it in my GitHub account. Also, I would like the cloned repository to be set as private.
This is my main.yml file, based on this response:
name: mainAction

on:
  schedule:
    - cron: "*/5 * * * *"
  workflow_dispatch:

jobs:
  copyRepo:
    runs-on: macOS-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Copy repo
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
        run: git clone "https://"$ACCESS_TOKEN"@github.com/targetuser/targetrepo.git"
Edit
I would like to have a copy of the GitHub repository on my account, and not just in the runner's "container".
Periodically clone external repo with github actions and set it as a private repo
That is shell syntax, so you need to run a shell to interpret it.
command:
  - sh
  - -c
  - |
    exec /opt/tools/Linux/jdk/openjdk1.8.0_181_x64/bin/java -XX:MaxRAM=$(( $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) * 70/100 ))
Is it possible to pass a function as the value in a K8s pod command for evaluation? I am passing in JVM arguments to set the MaxRAM parameter and would like to read the cgroups memory limit to compute a value for the argument.
This is an example of what I'm trying to do:
- command:
  - /opt/tools/Linux/jdk/openjdk1.8.0_181_x64/bin/java
  - -XX:MaxRAM=$(( $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) * 70/100 ))
Unfortunately the above doesn't work and fails with the following error:
Improperly specified VM option 'MaxRAM=$(( $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) * 100 / 70 ))'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Is this doable? If so, what's the right way to do it? Thanks!
Pass in function to be evaluated for K8s Commands list